The Therapy Process Observational Coding System for Child Psychotherapy Strategies Scale
ERIC Educational Resources Information Center
McLeod, Bryce D.; Weisz, John R.
2010-01-01
Most everyday child and adolescent psychotherapy does not follow manuals that document the procedures. Consequently, usual clinical care has remained poorly understood and rarely studied. The Therapy Process Observational Coding System for Child Psychotherapy-Strategies scale (TPOCS-S) is an observational measure of youth psychotherapy procedures…
ERIC Educational Resources Information Center
Fjermestad, Krister W.; McLeod, Bryce D.; Heiervang, Einar R.; Havik, Odd E.; Ost, Lars-Goran; Haugland, Bente S. M.
2012-01-01
The aim of this study was to examine the factor structure and psychometric properties of an observer-rated youth alliance measure, the Therapy Process Observational Coding System for Child Psychotherapy-Alliance scale (TPOCS-A). The sample was 52 youth diagnosed with anxiety disorders ("M" age = 12.43, "SD" = 2.23, range = 15;…
Coding Manual for Continuous Observation of Interactions by Single Subjects in an Academic Setting.
ERIC Educational Resources Information Center
Cobb, Joseph A.; Hops, Hyman
The manual, designed particularly for work with acting-out or behavior problem students, describes coding procedures used in the observation of continuous classroom interactions between the student and his peers and teacher. Peer and/or teacher behaviors antecedent and consequent to the subject's behavior are identified in the coding process,…
NASA Astrophysics Data System (ADS)
Villiger, Arturo; Schaer, Stefan; Dach, Rolf; Prange, Lars; Jäggi, Adrian
2017-04-01
It is common to handle code biases in the Global Navigation Satellite System (GNSS) data analysis as conventional differential code biases (DCBs): P1-C1, P1-P2, and P2-C2. Due to the increasing number of signals and systems in conjunction with various tracking modes for the different signals (as defined in RINEX3 format), the number of DCBs would increase drastically and the bookkeeping becomes almost unbearable. The Center for Orbit Determination in Europe (CODE) has thus changed its processing scheme to observable-specific signal biases (OSB). This means that for each observation involved all related satellite and receiver biases are considered. The OSB contributions from various ionosphere analyses (geometry-free linear combination) using different observables and frequencies and from clock analyses (ionosphere-free linear combination) are then combined on normal equation level. By this, one consistent set of OSB values per satellite and receiver can be obtained that contains all information needed for GNSS-related processing. This advanced procedure of code bias handling is now also applied to the IGS (International GNSS Service) MGEX (Multi-GNSS Experiment) procedure at CODE. Results for the biases from the legacy IGS solution as well as the CODE MGEX processing (considering GPS, GLONASS, Galileo, BeiDou, and QZSS) are presented. The consistency with the traditional method is confirmed and the new results are discussed regarding the long-term stability. When processing code data, it is essential to know the true observable types in order to correct for the associated biases. CODE has been verifying the receiver tracking technologies for GPS based on estimated DCB multipliers (for the RINEX 2 case). With the change to OSB, the original verification approach was extended to search for the best fitting observable types based on known OSB values. In essence, a multiplier parameter is estimated for each involved GNSS observable type. This implies that we could recover, for receivers tracking a combination of signals, even the factors of these combinations. The verification of the observable types is crucial to identify the correct observable types of RINEX 2 data (which does not contain the signal modulation in comparison to RINEX 3). The correct information of the used observable types is essential for precise point positioning (PPP) applications and GNSS ambiguity resolution. Multi-GNSS OSBs and verified receiver tracking modes are essential to get best possible multi-GNSS solutions for geodynamic purposes and other applications.
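For readers unfamiliar with the relation between the two bias conventions: a conventional DCB is simply the difference of two observable-specific biases. The short sketch below illustrates this bookkeeping; the observable codes follow RINEX 3 naming, but the numerical values are invented for illustration and are not CODE products.

```python
# Illustrative sketch (not CODE's processing software): conventional DCBs
# can be recovered as differences of observable-specific biases (OSBs).
# The example values below are made up for demonstration only.

osb_ns = {          # per-satellite OSBs in nanoseconds, keyed by observable type
    "C1W": 4.21,    # P1 (RINEX 3 code C1W)
    "C1C": 3.05,    # C1 (C/A code on L1)
    "C2W": 6.87,    # P2
}

def dcb(osb, obs_a, obs_b):
    """Differential code bias between two observables: DCB = OSB_a - OSB_b."""
    return osb[obs_a] - osb[obs_b]

print("P1-C1 DCB [ns]:", dcb(osb_ns, "C1W", "C1C"))
print("P1-P2 DCB [ns]:", dcb(osb_ns, "C1W", "C2W"))
```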
SIM_ADJUST -- A computer code that adjusts simulated equivalents for observations or predictions
Poeter, Eileen P.; Hill, Mary C.
2008-01-01
This report documents the SIM_ADJUST computer code. SIM_ADJUST surmounts an obstacle that is sometimes encountered when using universal model analysis computer codes such as UCODE_2005 (Poeter and others, 2005), PEST (Doherty, 2004), and OSTRICH (Matott, 2005; Fredrick and others, 2007). These codes often read simulated equivalents from a list in a file produced by a process model such as MODFLOW that represents a system of interest. At times values needed by the universal code are missing or assigned default values because the process model could not produce a useful solution. SIM_ADJUST can be used to (1) read a file that lists expected observation or prediction names and possible alternatives for the simulated values; (2) read a file produced by a process model that contains space or tab delimited columns, including a column of simulated values and a column of related observation or prediction names; (3) identify observations or predictions that have been omitted or assigned a default value by the process model; and (4) produce an adjusted file that contains a column of simulated values and a column of associated observation or prediction names. The user may provide alternatives that are constant values or that are alternative simulated values. The user may also provide a sequence of alternatives. For example, the heads from a series of cells may be specified to ensure that a meaningful value is available to compare with an observation located in a cell that may become dry. SIM_ADJUST is constructed using modules from the JUPITER API, and is intended for use on any computer operating system. SIM_ADJUST consists of algorithms programmed in Fortran90, which efficiently perform numerical calculations.
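The substitution logic SIM_ADJUST automates can be pictured with a minimal sketch like the one below. This is not the Fortran90 code itself; the file layout, the default flag value, and the observation names are assumptions made for illustration.

```python
# Minimal sketch of the substitution idea (not the actual SIM_ADJUST code):
# fill in simulated equivalents that the process model omitted or set to a
# default, using user-supplied alternatives. The default flag value (-999.0)
# and all names are assumptions for illustration.

DEFAULT_FLAG = -999.0

# expected observation names mapped to an ordered list of alternatives,
# which may be other simulated names or constant fallback values
alternatives = {
    "head_obs_12": ["head_cell_13", "head_cell_14", 350.0],
}

def read_simulated(path):
    """Read whitespace/tab-delimited lines of 'value name' into a dict."""
    sim = {}
    with open(path) as f:
        for line in f:
            value, name = line.split()[:2]
            sim[name] = float(value)
    return sim

def adjust(sim, alternatives):
    """Return a copy of sim with missing/default values replaced."""
    adjusted = dict(sim)
    for name, alts in alternatives.items():
        if adjusted.get(name, DEFAULT_FLAG) != DEFAULT_FLAG:
            continue  # a usable value is already present
        for alt in alts:
            if isinstance(alt, float):                     # constant fallback
                adjusted[name] = alt
                break
            if sim.get(alt, DEFAULT_FLAG) != DEFAULT_FLAG:  # alternative simulated value
                adjusted[name] = sim[alt]
                break
    return adjusted

sim = {"head_cell_13": -999.0, "head_cell_14": 351.2}
print(adjust(sim, alternatives))   # head_obs_12 picks up head_cell_14's value
```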
Imitation Learning Errors Are Affected by Visual Cues in Both Performance and Observation Phases.
Mizuguchi, Takashi; Sugimura, Ryoko; Shimada, Hideaki; Hasegawa, Takehiro
2017-08-01
Mechanisms of action imitation were examined. Previous studies have suggested that success or failure of imitation is determined at the point of observing an action; in other words, cognitive processing after observation is not related to the success of imitation. Twenty university students participated in each of three experiments in which they observed a series of object manipulations consisting of four elements (hands, tools, object, and end points) and then imitated the manipulations. In Experiment 1, a specific initially observed element was color coded, and the specific manipulated object at the imitation stage was identically color coded; participants accurately imitated the color coded element. In Experiment 2, a specific element was color coded at the observation but not at the imitation stage, and there were no effects of color coding on imitation. In Experiment 3, participants were verbally instructed to attend to a specific element at the imitation stage, but the verbal instructions had no effect. Thus, the success of imitation may not be determined at the stage of observing an action, and color coding can provide a clue for imitation at the imitation stage.
Progressive changes in non-coding RNA profile in leucocytes with age
Muñoz-Culla, Maider; Irizar, Haritz; Gorostidi, Ana; Alberro, Ainhoa; Osorio-Querejeta, Iñaki; Ruiz-Martínez, Javier; Olascoaga, Javier; de Munain, Adolfo López; Otaegui, David
2017-01-01
It has been observed that immune cell deterioration occurs in the elderly, as well as a chronic low-grade inflammation called inflammaging. These cellular changes must be driven by numerous changes in gene expression and in fact, both protein-coding and non-coding RNA expression alterations have been observed in peripheral blood mononuclear cells from elderly people. In the present work we have studied the expression of small non-coding RNA (microRNA and small nucleolar RNA -snoRNA-) from healthy individuals from 24 to 79 years old. We have observed that the expression of 69 non-coding RNAs (56 microRNAs and 13 snoRNAs) changes progressively with chronological age. According to our results, the age range from 47 to 54 is critical given that it is the period when the expression trend (increasing or decreasing) of age-related small non-coding RNAs is more pronounced. Furthermore, age-related miRNAs regulate genes that are involved in immune, cell cycle and cancer-related processes, which had already been associated with human aging. Therefore, human aging could be studied as a result of progressive molecular changes, and different age ranges should be analysed to cover the whole aging process. PMID:28448962
Numerical modeling of the fracture process in a three-unit all-ceramic fixed partial denture.
Kou, Wen; Kou, Shaoquan; Liu, Hongyuan; Sjögren, Göran
2007-08-01
The main objectives were to examine the fracture mechanism and process of a ceramic fixed partial denture (FPD) framework under simulated mechanical loading using a recently developed numerical modeling code, the R-T(2D) code, and also to evaluate the suitability of the R-T(2D) code as a tool for this purpose. Using the recently developed R-T(2D) code, the fracture mechanism and process of a three-unit (3U) yttria-tetragonal zirconia polycrystal (Y-TZP) ceramic FPD framework were simulated under static loading. In addition, the fracture pattern obtained using the numerical simulation was compared with the fracture pattern obtained in a previous laboratory test. The results revealed that the framework fracture pattern obtained using the numerical simulation agreed with that observed in a previous laboratory test. Quasi-photoelastic stress fringe patterns and acoustic emission showed that the fracture mechanism was tensile failure and that the crack started at the lower boundary of the framework. The fracture process could be followed both step-by-step and step-in-step. Based on the findings in the current study, the R-T(2D) code seems suitable for use as a complement to other tests and clinical observations in studying stress distribution, fracture mechanisms and fracture processes in ceramic FPD frameworks.
Role of Symbolic Coding and Rehearsal Processes in Observational Learning
ERIC Educational Resources Information Center
Bandura, Albert; Jeffery, Robert W.
1973-01-01
Results were interpreted as supporting a social learning view of observational learning that emphasizes central processing of response information in the acquisition phase and motor reproduction and incentive processes in the overt enactment of what has been learned. (Author)
Geomagnetic Storm Impact On GPS Code Positioning
NASA Astrophysics Data System (ADS)
Uray, Fırat; Varlık, Abdullah; Kalaycı, İbrahim; Öǧütcü, Sermet
2017-04-01
This paper deals with the geomagnetic storm impact on GPS code processing using the GIPSY/OASIS research software. Twelve IGS stations at mid-latitudes were chosen to conduct the experiment. These IGS stations were classified as non-cross-correlation receivers reporting P1 and P2 (NONCC-P1P2), non-cross-correlation receivers reporting C1 and P2 (NONCC-C1P2), and cross-correlation (CC-C1P2) receivers. In order to keep the code processing consistent between the classified receivers, only P2 code observations from the GPS satellites were processed. Four extreme geomagnetic storms (October 2003, day of the year (DOY) 29 and 30, the Halloween Storm; November 2003, DOY 20; November 2004, DOY 08) and four geomagnetically quiet days in 2005 (DOY 92, 98, 99, 100) were chosen for this study. 24-hour RINEX data of the IGS stations were processed on an epoch-by-epoch basis. In this way, the receiver clock and Earth-Centered Earth-Fixed (ECEF) Cartesian coordinates were solved on a per-epoch basis for each day. The IGS combined broadcast ephemeris file (brdc) was used to partly compensate the ionospheric effect on the P2 code observations. No tropospheric model was used for the processing. Jet Propulsion Laboratory Application Technology Satellites (JPL ATS) computed coordinates of the stations were taken as true coordinates. The differences between the computed ECEF coordinates and the assumed true coordinates were resolved into topocentric coordinates (north, east, up). Root mean square (RMS) errors for each component were calculated for each day. The results show that two-dimensional and vertical accuracy decreases significantly during the geomagnetic storm days compared with the geomagnetically quiet days. It is observed that vertical accuracy is much more affected than horizontal accuracy by a geomagnetic storm. Up to 50 meters of error in the vertical component was observed on a geomagnetic storm day. It is also observed that the performance of the Klobuchar ionospheric correction parameters during geomagnetic storm days cannot guarantee improved accuracy, due to ionospheric scintillation.
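Resolving the difference between computed and true ECEF coordinates into north, east and up components uses a standard rotation at the station's geodetic latitude and longitude. A minimal sketch, with made-up numbers rather than the study's data:

```python
import numpy as np

def ecef_to_neu(d_ecef, lat_rad, lon_rad):
    """Rotate an ECEF difference vector (dx, dy, dz) into local
    north, east, up components at a site with the given geodetic
    latitude and longitude (radians)."""
    sp, cp = np.sin(lat_rad), np.cos(lat_rad)
    sl, cl = np.sin(lon_rad), np.cos(lon_rad)
    rot = np.array([
        [-sp * cl, -sp * sl,  cp],   # north
        [-sl,       cl,       0.0],  # east
        [ cp * cl,  cp * sl,  sp],   # up
    ])
    return rot @ np.asarray(d_ecef, dtype=float)

# Example: one epoch's error vector (computed minus "true" ECEF, in meters)
# at an illustrative mid-latitude site; RMS would be taken over all epochs.
north, east, up = ecef_to_neu([0.8, -1.2, 2.5], np.radians(52.0), np.radians(21.0))
print(round(north, 2), round(east, 2), round(up, 2))
```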
Subramanian, Amarnath; Westra, Bonnie; Matney, Susan; Wilson, Patricia S; Delaney, Connie W; Huff, Stan; Huff, Stanley M; Huber, Diane
2008-11-06
This poster describes the process used to integrate the Nursing Management Minimum Data Set (NMMDS), an instrument to measure the nursing context of care, into the Logical Observation Identifiers Names and Codes (LOINC) system to facilitate contextualization of quality measures. Integration of the first three of 18 elements resulted in 48 new codes including five panels. The LOINC Clinical Committee has approved the presented mapping for their next release.
Shahraz, Saeid; Lagu, Tara; Ritter, Grant A; Liu, Xiadong; Tompkins, Christopher
2017-03-01
Selection of International Classification of Diseases (ICD)-based coded information for complex conditions such as severe sepsis is a subjective process and the results are sensitive to the codes selected. We use an innovative data exploration method to guide ICD-based case selection for severe sepsis. Using the Nationwide Inpatient Sample, we applied Latent Class Analysis (LCA) to determine if medical coders follow any uniform and sensible coding for observations with severe sepsis. We examined whether ICD-9 codes specific to sepsis (038.xx for septicemia, a subset of 995.9 codes representing Systemic Inflammatory Response syndrome, and 785.52 for septic shock) could all be members of the same latent class. Hospitalizations coded with sepsis-specific codes could be assigned to a latent class of their own. This class constituted 22.8% of all potential sepsis observations. The probability of an observation with any sepsis-specific codes being assigned to the residual class was near 0. The chance of an observation in the residual class having a sepsis-specific code as the principal diagnosis was close to 0. Validity of sepsis class assignment is supported by empirical results, which indicated that in-hospital deaths in the sepsis-specific class were around 4 times as likely as that in the residual class. The conventional methods of defining severe sepsis cases in observational data substantially misclassify sepsis cases. We suggest a methodology that helps reliable selection of ICD codes for conditions that require complex coding.
Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan
2016-04-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may have, however, process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values on restricting the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options; 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated. These sensitivities were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities towards standard and hard-coded parameters in Noah-MP because of their tight coupling via the water balance. It should therefore be comparable to calibrate Noah-MP either against latent heat observations or against river runoff data. Latent heat and total runoff are sensitive to both plant and soil parameters. Calibrating a sub-set of only soil parameters, for example, thus limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.
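As a pointer to how such a Sobol' analysis is typically set up in practice, the sketch below uses the SALib Python package on a trivial stand-in function; the parameter names, bounds and model are invented for illustration and are unrelated to Noah-MP.

```python
# Sketch of a Sobol' sensitivity analysis in the spirit of the study, using
# the SALib package. The "model" here is a trivial stand-in, not Noah-MP,
# and the parameter names/bounds are invented for illustration.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["soil_resistance_coeff", "snow_albedo_param", "root_depth"],
    "bounds": [[0.1, 10.0], [0.3, 0.9], [0.5, 3.0]],
}

X = saltelli.sample(problem, 1024)          # N*(2D+2) parameter sets
Y = np.array([x[0] ** 2 + 5.0 * x[1] + 0.1 * x[2] for x in X])  # stand-in flux

Si = sobol.analyze(problem, Y)
print(Si["S1"])   # first-order sensitivity indices
print(Si["ST"])   # total-order indices, including interactions
```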
Synchrony and motor mimicking in chimpanzee observational learning
Fuhrmann, Delia; Ravignani, Andrea; Marshall-Pescini, Sarah; Whiten, Andrew
2014-01-01
Cumulative tool-based culture underwrote our species' evolutionary success, and tool-based nut-cracking is one of the strongest candidates for cultural transmission in our closest relatives, chimpanzees. However the social learning processes that may explain both the similarities and differences between the species remain unclear. A previous study of nut-cracking by initially naïve chimpanzees suggested that a learning chimpanzee holding no hammer nevertheless replicated hammering actions it witnessed. This observation has potentially important implications for the nature of the social learning processes and underlying motor coding involved. In the present study, model and observer actions were quantified frame-by-frame and analysed with stringent statistical methods, demonstrating synchrony between the observer's and model's movements, cross-correlation of these movements above chance level and a unidirectional transmission process from model to observer. These results provide the first quantitative evidence for motor mimicking underlain by motor coding in apes, with implications for mirror neuron function. PMID:24923651
Synchrony and motor mimicking in chimpanzee observational learning.
Fuhrmann, Delia; Ravignani, Andrea; Marshall-Pescini, Sarah; Whiten, Andrew
2014-06-13
Cumulative tool-based culture underwrote our species' evolutionary success, and tool-based nut-cracking is one of the strongest candidates for cultural transmission in our closest relatives, chimpanzees. However the social learning processes that may explain both the similarities and differences between the species remain unclear. A previous study of nut-cracking by initially naïve chimpanzees suggested that a learning chimpanzee holding no hammer nevertheless replicated hammering actions it witnessed. This observation has potentially important implications for the nature of the social learning processes and underlying motor coding involved. In the present study, model and observer actions were quantified frame-by-frame and analysed with stringent statistical methods, demonstrating synchrony between the observer's and model's movements, cross-correlation of these movements above chance level and a unidirectional transmission process from model to observer. These results provide the first quantitative evidence for motor mimicking underlain by motor coding in apes, with implications for mirror neuron function.
Seeing the mean: ensemble coding for sets of faces.
Haberman, Jason; Whitney, David
2009-06-01
We frequently encounter groups of similar objects in our visual environment: a bed of flowers, a basket of oranges, a crowd of people. How does the visual system process such redundancy? Research shows that rather than code every element in a texture, the visual system favors a summary statistical representation of all the elements. The authors demonstrate that although it may facilitate texture perception, ensemble coding also occurs for faces-a level of processing well beyond that of textures. Observers viewed sets of faces varying in emotionality (e.g., happy to sad) and assessed the mean emotion of each set. Although observers retained little information about the individual set members, they had a remarkably precise representation of the mean emotion. Observers continued to discriminate the mean emotion accurately even when they viewed sets of 16 faces for 500 ms or less. Modeling revealed that perceiving the average facial expression in groups of faces was not due to noisy representation or noisy discrimination. These findings support the hypothesis that ensemble coding occurs extremely fast at multiple levels of visual analysis. (c) 2009 APA, all rights reserved.
Social priming of hemispatial neglect affects spatial coding: Evidence from the Simon task.
Arend, Isabel; Aisenberg, Daniela; Henik, Avishai
2016-10-01
In the Simon effect (SE), choice reactions are fast if the location of the stimulus and the response correspond when stimulus location is task-irrelevant; therefore, the SE reflects the automatic processing of space. Priming of social concepts was found to affect automatic processing in the Stroop effect. We investigated whether spatial coding measured by the SE can be affected by the observer's mental state. We used two social priming manipulations of impairments: one involving spatial processing - hemispatial neglect (HN) and another involving color perception - achromatopsia (ACHM). In two experiments the SE was reduced in the "neglected" visual field (VF) under the HN, but not under the ACHM manipulation. Our results show that spatial coding is sensitive to spatial representations that are not derived from task-relevant parameters, but from the observer's cognitive state. These findings dispute stimulus-response interference models grounded on the idea of the automaticity of spatial processing. Copyright © 2016. Published by Elsevier Inc.
The Breakthrough Listen Search for Intelligent Life: Data Calibration using Pulsars
NASA Astrophysics Data System (ADS)
Brinkman-Traverse, Casey Lynn; Gajjar, Vishal; BSRC
2018-01-01
The ability to distinguish ET signals requires a deep understanding of the radio telescopes with which we search; therefore, before we observe stars of interest, the Breakthrough Listen scientists at Berkeley SETI Research Center first observe a pulsar with well-documented flux and polarization properties. Calibrating the flux and polarization by hand is a lengthy process, so we produced a pipeline code that automatically calibrates the pulsar in under an hour. Using PSRCHIVE, the code coherently dedisperses the pulsed radio signals and then calibrates the flux using observation files with a noise diode turning on and off. The code was developed using PSR B1937+21 and is primarily used on PSR B0329+54. This will expedite the process of assessing the quality of data collected from the Green Bank Telescope in West Virginia and will allow us to more efficiently find life beyond Planet Earth. Additionally, the stability of the B0329+54 calibration data will allow us to analyze data taken on FRBs with confidence in their cosmic origin.
NASA Astrophysics Data System (ADS)
Popov, V. N.; Botygin, I. A.; Kolochev, A. S.
2017-01-01
The approach allows representing data of international codes for exchange of meteorological information using metadescription as the formalism associated with certain categories of resources. Development of metadata components was based on an analysis of the data of surface meteorological observations, atmosphere vertical sounding, atmosphere wind sounding, weather radar observing, observations from satellites and others. A common set of metadata components was formed including classes, divisions and groups for a generalized description of the meteorological data. The structure and content of the main components of a generalized metadescription are presented in detail by the example of representation of meteorological observations from land and sea stations. The functional structure of a distributed computing system is described. It allows organizing the storage of large volumes of meteorological data for their further processing in the solution of problems of the analysis and forecasting of climatic processes.
NASA Technical Reports Server (NTRS)
Gaffney, J. E., Jr.; Judge, R. W.
1981-01-01
A model of a software development process is described. The software development process is seen to consist of a sequence of activities, such as 'program design' and 'module development' (or coding). A manpower estimate is made by multiplying code size by the rates (man months per thousand lines of code) for each of the activities relevant to the particular case of interest and summing up the results. The effect of four objectively determinable factors (organization, software product type, computer type, and code type) on productivity values for each of nine principal software development activities was assessed. Four factors were identified which account for 39% of the observed productivity variation.
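The estimation rule described above is simple enough to show in a few lines; the activity rates below are invented placeholders, not the calibrated values from the model.

```python
# Toy illustration of the estimation rule described above: multiply code size
# (in KLOC) by an activity rate (man-months per KLOC) and sum over activities.
# The rates are invented placeholders, not the paper's calibrated values.

kloc = 12.0  # estimated size: 12,000 lines of code

rates_mm_per_kloc = {
    "program design":     0.6,
    "module development": 1.4,   # coding
    "integration test":   0.8,
}

estimate_mm = sum(kloc * rate for rate in rates_mm_per_kloc.values())
print(f"Estimated effort: {estimate_mm:.1f} man-months")
```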
Water cycle algorithm: A detailed standard code
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Eskandar, Hadi; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon
Inspired by the observation of the water cycle process and movements of rivers and streams toward the sea, a population-based metaheuristic algorithm, the water cycle algorithm (WCA) has recently been proposed. Lately, an increasing number of WCA applications have appeared and the WCA has been utilized in different optimization fields. This paper provides detailed open source code for the WCA, of which the performance and efficiency has been demonstrated for solving optimization problems. The WCA has an interesting and simple concept and this paper aims to use its source code to provide a step-by-step explanation of the process it follows.
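The paper's contribution is the detailed standard code itself; as a rough orientation only, a highly simplified sketch of the WCA idea (sea, rivers, streams, evaporation and raining) might look like the following. Stream assignment and the evaporation rule are simplified here, so this should not be mistaken for the standard implementation.

```python
# Highly simplified sketch of the water cycle algorithm (WCA) idea, not the
# detailed standard code provided by the paper: streams flow toward rivers,
# rivers toward the sea (the current best solution), and evaporation followed
# by raining re-seeds solutions that get too close to the sea.
import numpy as np

def wca(cost, bounds, n_pop=50, n_sr=4, d_max=1e-3, max_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = lo + rng.random((n_pop, dim)) * (hi - lo)

    for _ in range(max_iter):
        order = np.argsort([cost(x) for x in pop])
        pop = pop[order]                      # pop[0] is the sea, next are rivers
        for i in range(n_sr, n_pop):          # streams flow toward a river or the sea
            guide = pop[i % n_sr]
            pop[i] += rng.random(dim) * 2.0 * (guide - pop[i])
        for i in range(1, n_sr):              # rivers flow toward the sea
            pop[i] += rng.random(dim) * 2.0 * (pop[0] - pop[i])
        pop = np.clip(pop, lo, hi)
        for i in range(1, n_sr):              # evaporation + raining: re-seed rivers
            if np.linalg.norm(pop[0] - pop[i]) < d_max:
                pop[i] = lo + rng.random(dim) * (hi - lo)
        d_max *= 0.99                         # evaporation condition shrinks

    best = min(pop, key=cost)
    return best, cost(best)

# Example: minimize the sphere function in 2-D.
best_x, best_f = wca(lambda x: float(np.sum(x ** 2)), [(-5, 5), (-5, 5)])
print(best_x, best_f)
```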
Discrimination of correlated and entangling quantum channels with selective process tomography
Dumitrescu, Eugene; Humble, Travis S.
2016-10-10
The accurate and reliable characterization of quantum dynamical processes underlies efforts to validate quantum technologies, where discrimination between competing models of observed behaviors informs efforts to fabricate and operate qubit devices. We present a protocol for quantum channel discrimination that leverages advances in direct characterization of quantum dynamics (DCQD) codes. We demonstrate that DCQD codes enable selective process tomography to improve discrimination between entangling and correlated quantum dynamics. Numerical simulations show selective process tomography requires only a few measurement configurations to achieve a low false alarm rate and that the DCQD encoding improves the resilience of the protocol to hidden sources of noise. Lastly, our results show that selective process tomography with DCQD codes is useful for efficiently distinguishing sources of correlated crosstalk from uncorrelated noise in current and future experimental platforms.
Effect of two doses of ginkgo biloba extract (EGb 761) on the dual-coding test in elderly subjects.
Allain, H; Raoul, P; Lieury, A; LeCoz, F; Gandon, J M; d'Arbigny, P
1993-01-01
The subjects of this double-blind study were 18 elderly men and women (mean age, 69.3 years) with slight age-related memory impairment. In a crossover-study design, each subject received placebo or an extract of Ginkgo biloba (EGb 761) (320 mg or 600 mg) 1 hour before performing a dual-coding test that measures the speed of information processing; the test consists of several coding series of drawings and words presented at decreasing times of 1920, 960, 480, 240, and 120 ms. The dual-coding phenomenon (a break point between coding verbal material and images) was demonstrated in all the tests. After placebo, the break point was observed at 960 ms and dual coding beginning at 1920 ms. After each dose of the ginkgo extract, the break point (at 480 ms) and dual coding (at 960 ms) were significantly shifted toward a shorter presentation time, indicating an improvement in the speed of information processing.
NASA Astrophysics Data System (ADS)
Wielgosz, P. A.
In this year, the system of active geodetic GPS permanent stations is going to be established in Poland. This system should provide GPS observations for a wide spectrum of users; in particular, it will be a great opportunity for surveyors. Many surveyors still use cheaper, single frequency receivers. This paper focuses on processing of single frequency GPS observations only. During processing of such observations the ionosphere plays an important role, so we concentrated on the influence of the ionosphere on the positional coordinates. Twenty consecutive days of GPS data from the year 2001 were processed to analyze the accuracy of a derived three-dimensional relative vector position between GPS stations. Observations from two Polish EPN/IGS stations, BOGO and JOZE, were used. In addition, a new test station, IGIK, was created. In this paper, the results of single frequency GPS observations processing in near real-time are presented. Baselines of 15, 27 and 42 kilometers and sessions of 1, 2, 3, 4, and 6 hours long were processed. For the processing we used CODE (Centre for Orbit Determination in Europe, Bern, Switzerland) predicted products: orbits and ionosphere information. These products are available in real-time and enable near real-time processing. The Bernese v. 4.2 software for Linux and the BPE (Bernese Processing Engine) mode were used. The results are shown with reference to the dual frequency weekly solution (the best solution). The obtained accuracy is presented as a function of GPS observation time and baseline length for single frequency GPS observations.
The Performance and Observation of Action Shape Future Behaviour
ERIC Educational Resources Information Center
Welsh, Timothy N.; McDougall, Laura M.; Weeks, Daniel J.
2009-01-01
The observation of other people's actions plays an important role in shaping the perceptual, cognitive, and motor processes of the observer. It has been suggested that these social influences occur because the observation of action evokes a representation of that response in the observer and that these codes are subsequently accessed by other…
Development and feasibility testing of the Pediatric Emergency Discharge Interaction Coding Scheme.
Curran, Janet A; Taylor, Alexandra; Chorney, Jill; Porter, Stephen; Murphy, Andrea; MacPhee, Shannon; Bishop, Andrea; Haworth, Rebecca
2017-08-01
Discharge communication is an important aspect of high-quality emergency care. This study addresses the gap in knowledge on how to describe discharge communication in a paediatric emergency department (ED). The objective of this feasibility study was to develop and test a coding scheme to characterize discharge communication between health-care providers (HCPs) and caregivers who visit the ED with their children. The Pediatric Emergency Discharge Interaction Coding Scheme (PEDICS) and coding manual were developed following a review of the literature and an iterative refinement process involving HCP observations, inter-rater assessments and team consensus. The coding scheme was pilot-tested through observations of HCPs across a range of shifts in one urban paediatric ED. Overall, 329 patient observations were carried out across 50 observational shifts. Inter-rater reliability was evaluated in 16% of the observations. The final version of the PEDICS contained 41 communication elements. Kappa scores were greater than .60 for the majority of communication elements. The most frequently observed communication elements were under the Introduction node and the least frequently observed were under the Social Concerns node. HCPs initiated the majority of the communication. The Pediatric Emergency Discharge Interaction Coding Scheme addresses an important gap in the discharge communication literature. The tool is useful for mapping patterns of discharge communication between HCPs and caregivers. Results from our pilot test identified deficits in specific areas of discharge communication that could impact adherence to discharge instructions. The PEDICS would benefit from further testing with a different sample of HCPs. © 2017 The Authors. Health Expectations Published by John Wiley & Sons Ltd.
Zhou, Yuefang; Cameron, Elaine; Forbes, Gillian; Humphris, Gerry
2012-08-01
To develop and validate the St Andrews Behavioural Interaction Coding Scheme (SABICS): a tool to record nurse-child interactive behaviours. The SABICS was developed primarily from observation of video recorded interactions, and refined through an iterative process of applying the scheme to new data sets. Its practical applicability was assessed via implementation of the scheme on specialised behavioural coding software. Reliability was calculated using Cohen's Kappa. Discriminant validity was assessed using logistic regression. The SABICS contains 48 codes. Fifty-five nurse-child interactions were successfully coded through administering the scheme on The Observer XT 8.0 system. Two visualization results of interaction patterns demonstrated the scheme's capability of capturing complex interaction processes. Cohen's Kappa was 0.66 (inter-coder) and 0.88 and 0.78 (two intra-coders). The frequency of nurse behaviours, such as "instruction" (OR = 1.32, p = 0.027) and "praise" (OR = 2.04, p = 0.027), predicted a child receiving the intervention. The SABICS is a unique system to record interactions between dental nurses and 3- to 5-year-old children. It records and displays complex nurse-child interactive behaviours. It is easily administered and demonstrates reasonable psychometric properties. The SABICS has potential for other paediatric settings. Its development procedure may be helpful for other similar coding scheme development. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
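For readers unfamiliar with the agreement statistic used here and in several of the coding-scheme studies above, Cohen's kappa compares the observed agreement between two coders against the agreement expected by chance. A small sketch with invented labels:

```python
# Sketch of how an inter-coder agreement statistic such as Cohen's kappa is
# computed from two coders' labels for the same observed events; the labels
# below are invented for illustration.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    pa, pb = Counter(coder_a), Counter(coder_b)
    expected = sum(pa[c] * pb[c] for c in set(pa) | set(pb)) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["instruction", "praise", "praise", "other", "instruction", "other"]
b = ["instruction", "praise", "other", "other", "instruction", "praise"]
print(round(cohens_kappa(a, b), 2))   # 0.5 for this toy example
```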
Surfing a spike wave down the ventral stream.
VanRullen, Rufin; Thorpe, Simon J
2002-10-01
Numerous theories of neural processing, often motivated by experimental observations, have explored the computational properties of neural codes based on the absolute or relative timing of spikes in spike trains. Spiking neuron models and theories however, as well as their experimental counterparts, have generally been limited to the simulation or observation of isolated neurons, isolated spike trains, or reduced neural populations. Such theories would therefore seem inappropriate to capture the properties of a neural code relying on temporal spike patterns distributed across large neuronal populations. Here we report a range of computer simulations and theoretical considerations that were designed to explore the possibilities of one such code and its relevance for visual processing. In a unified framework where the relation between stimulus saliency and spike relative timing plays the central role, we describe how the ventral stream of the visual system could process natural input scenes and extract meaningful information, both rapidly and reliably. The first wave of spikes generated in the retina in response to a visual stimulation carries information explicitly in its spatio-temporal structure: the most salient information is represented by the first spikes over the population. This spike wave, propagating through a hierarchy of visual areas, is regenerated at each processing stage, where its temporal structure can be modified by (i). the selectivity of the cortical neurons, (ii). lateral interactions and (iii). top-down attentional influences from higher order cortical areas. The resulting model could account for the remarkable efficiency and rapidity of processing observed in the primate visual system.
Tan, Edwin T.; Martin, Sarah R.; Fortier, Michelle A.; Kain, Zeev N.
2012-01-01
Objective To develop and validate a behavioral coding measure, the Children's Behavior Coding System-PACU (CBCS-P), for children's distress and nondistress behaviors while in the postanesthesia recovery unit. Methods A multidisciplinary team examined videotapes of children in the PACU and developed a coding scheme that subsequently underwent a refinement process (CBCS-P). To examine the reliability and validity of the coding system, 121 children and their parents were videotaped during their stay in the PACU. Participants were healthy children undergoing elective, outpatient surgery and general anesthesia. The CBCS-P was utilized and objective data from medical charts (analgesic consumption and pain scores) were extracted to establish validity. Results Kappa values indicated good-to-excellent (κ's > .65) interrater reliability of the individual codes. The CBCS-P had good criterion validity when compared to children's analgesic consumption and pain scores. Conclusions The CBCS-P is a reliable, observational coding method that captures children's distress and nondistress postoperative behaviors. These findings highlight the importance of considering context in both the development and application of observational coding schemes. PMID:22167123
Spectral and Structure Modeling of Low and High Mass Young Stars Using a Radiative Transfer Code
NASA Astrophysics Data System (ADS)
Robson Rocha, Will; Pilling, Sergio
Spectroscopic data from space telescopes (ISO, Spitzer, Herschel) show that, in addition to dust grains (e.g. silicates), frozen molecular species (astrophysical ices, such as H2O, CO, CO2, CH3OH) are also present in circumstellar environments. In this work we present a study of the modeling of low and high mass young stellar objects (YSOs), in which we highlight the importance of using astrophysical ices processed by the radiation (UV, cosmic rays) coming from stars in the formation process. This is important to characterize the physicochemical evolution of the ices distributed through the protostellar disk and its envelope in some situations. To perform this analysis, we gathered (i) observational data from the Infrared Space Observatory (ISO) related to the low mass protostar Elias29 and the high mass protostar W33A, (ii) absorbance experimental data in the infrared spectral range used to determine the optical constants of the materials observed around these objects and (iii) a powerful radiative transfer code to simulate the astrophysical environment (RADMC-3D, Dullemond et al. 2012). Briefly, the radiative transfer calculation of the YSOs was done employing the RADMC-3D code. The model outputs were the spectral energy distribution and theoretical images at different wavelengths of the studied objects. The functionality of this code is based on the Monte Carlo method in addition to Mie theory for the interaction between radiation and matter. The observational data from different space telescopes were used as a reference for comparison with the modeled data. The optical constants in the infrared, used as input in the models, were calculated directly from absorbance data obtained in the laboratory for both unprocessed and processed simulated interstellar samples by using the NKABS code (Rocha & Pilling 2014). We show from this study that some absorption bands in the infrared, observed in the spectra of Elias29 and W33A, can arise after the ices around the protostars are processed by the radiation coming from the central object. In addition, we were also able to compare the observational data for these two objects with those obtained in the modeling. The authors would like to thank the agencies FAPESP (JP#2009/18304-0 and PHD#2013/07657-5).
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Takeshita, Oscar Y.; Cabral, Hermano A.
1998-01-01
It is well known that the BER performance of a parallel concatenated turbo-code improves roughly as 1/N, where N is the information block length. However, it has been observed by Benedetto and Montorsi that for most parallel concatenated turbo-codes, the FER performance does not improve monotonically with N. In this report, we study the FER of turbo-codes, and the effects of their concatenation with an outer code. Two methods of concatenation are investigated: across several frames and within each frame. Some asymmetric codes are shown to have excellent FER performance with an information block length of 16384. We also show that the proposed outer coding schemes can improve the BER performance as well by eliminating pathological frames generated by the iterative MAP decoding process.
Developing and Modifying Behavioral Coding Schemes in Pediatric Psychology: A Practical Guide
McMurtry, C. Meghan; Chambers, Christine T.; Bakeman, Roger
2015-01-01
Objectives To provide a concise and practical guide to the development, modification, and use of behavioral coding schemes for observational data in pediatric psychology. Methods This article provides a review of relevant literature and experience in developing and refining behavioral coding schemes. Results A step-by-step guide to developing and/or modifying behavioral coding schemes is provided. Major steps include refining a research question, developing or refining the coding manual, piloting and refining the coding manual, and implementing the coding scheme. Major tasks within each step are discussed, and pediatric psychology examples are provided throughout. Conclusions Behavioral coding can be a complex and time-intensive process, but the approach is invaluable in allowing researchers to address clinically relevant research questions in ways that would not otherwise be possible. PMID:25416837
Levesque, Eric; Hoti, Emir; de La Serna, Sofia; Habouchi, Houssam; Ichai, Philippe; Saliba, Faouzi; Samuel, Didier; Azoulay, Daniel
2013-03-01
In the French healthcare system, the intensive care budget allocated is directly dependent on the activity level of the center. To evaluate this activity level, it is necessary to code the medical diagnoses and procedures performed on Intensive Care Unit (ICU) patients. The aim of this study was to evaluate the effects of using an Intensive Care Information System (ICIS) on the incidence of coding errors and its impact on the ICU budget allocated. Since 2005, the documentation on and monitoring of every patient admitted to our ICU has been carried out using an ICIS. However, the coding process was performed manually until 2008. This study focused on two periods: the period of manual coding (year 2007) and the period of computerized coding (year 2008) which covered a total of 1403 ICU patients. The time spent on the coding process, the rate of coding errors (defined as patients missed/not coded or wrongly identified as undergoing major procedure/s) and the financial impact were evaluated for these two periods. With computerized coding, the time per admission decreased significantly (from 6.8 ± 2.8 min in 2007 to 3.6 ± 1.9 min in 2008, p<0.001). Similarly, a reduction in coding errors was observed (7.9% vs. 2.2%, p<0.001). This decrease in coding errors resulted in a reduced difference between the potential and real ICU financial supplements obtained in the respective years (€194,139 loss in 2007 vs. a €1628 loss in 2008). Using specific computer programs improves the intensive process of manual coding by shortening the time required as well as reducing errors, which in turn positively impacts the ICU budget allocation. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Baudouin, Alexia; Clarys, David; Vanneste, Sandrine; Isingrini, Michel
2009-12-01
The aim of the present study was to examine executive dysfunctioning and decreased processing speed as potential mediators of age-related differences in episodic memory. We compared the performances of young and elderly adults in a free-recall task. Participants were also given tests to measure executive functions and perceptual processing speed and a coding task (the Digit Symbol Substitution Test, DSST). More precisely, we tested the hypothesis that executive functions would mediate the age-related differences observed in the free-recall task better than perceptual speed. We also tested the assumption that a coding task, assumed to involve both executive processes and perceptual speed, would be the best mediator of age-related differences in memory. Findings first confirmed that the DSST combines executive processes and perceptual speed. Secondly, they showed that executive functions are a significant mediator of age-related differences in memory, and that DSST performance is the best predictor.
ChromaStarPy: A Stellar Atmosphere and Spectrum Modeling and Visualization Lab in Python
NASA Astrophysics Data System (ADS)
Short, C. Ian; Bayer, Jason H. T.; Burns, Lindsey M.
2018-02-01
We announce ChromaStarPy, an integrated general stellar atmospheric modeling and spectrum synthesis code written entirely in python V. 3. ChromaStarPy is a direct port of the ChromaStarServer (CSServ) Java modeling code described in earlier papers in this series, and many of the associated JavaScript (JS) post-processing procedures have been ported and incorporated into CSPy so that students have access to ready-made data products. A python integrated development environment (IDE) allows a student in a more advanced course to experiment with the code and to graphically visualize intermediate and final results, ad hoc, as they are running it. CSPy allows students and researchers to compare modeled to observed spectra in the same IDE in which they are processing observational data, while having complete control over the stellar parameters affecting the synthetic spectra. We also take the opportunity to describe improvements that have been made to the related codes, ChromaStar (CS), CSServ, and ChromaStarDB (CSDB), that, where relevant, have also been incorporated into CSPy. The application may be found at the home page of the OpenStars project: http://www.ap.smu.ca/OpenStars/.
Baneyx, Audrey; Charlet, Jean; Jaulent, Marie-Christine
2007-01-01
Pathologies and acts are classified in thesauri to help physicians code their activity. In practice, the use of thesauri is not sufficient to reduce variability in coding, and thesauri are not suitable for computer processing. We think the automation of the coding task requires a conceptual modeling of medical items: an ontology. Our task is to help lung specialists code acts and diagnoses with software that represents the medical knowledge of this specialty through an ontology. The objective of the reported work was to build an ontology of pulmonary diseases dedicated to the coding process. To carry out this objective, we developed a precise methodological process for the knowledge engineer in order to build various types of medical ontologies. This process is based on the need to express precisely in natural language the meaning of each concept using differential semantics principles. A differential ontology is a hierarchy of concepts and relationships organized according to their similarities and differences. Our main research hypothesis is to apply natural language processing tools to corpora to develop the resources needed to build the ontology. We consider two corpora, one composed of patient discharge summaries and the other being a teaching book. We propose to combine two approaches to enrich the ontology building: (i) a method which consists of building terminological resources through distributional analysis and (ii) a method based on the observation of corpus sequences in order to reveal semantic relationships. Our ontology currently includes 1550 concepts and the software implementing the coding process is still under development. Results show that the proposed approach is operational and indicate that the combination of these methods and the comparison of the resulting terminological structures give interesting clues to a knowledge engineer for the building of an ontology.
Ahmad, Muneer; Jung, Low Tan; Bhuiyan, Al-Amin
2017-10-01
Digital signal processing techniques commonly employ fixed length window filters to process the signal contents. DNA signals differ in characteristics from common digital signals since they carry nucleotides as contents. The nucleotides carry genetic code context and exhibit fuzzy behaviors due to their special structure and order in the DNA strand. Employing conventional fixed length window filters for DNA signal processing produces spectral leakage and hence results in signal noise. A biological context aware adaptive window filter is required to process DNA signals. This paper introduces a biologically inspired fuzzy adaptive window median filter (FAWMF) which computes the fuzzy membership strength of nucleotides in each slide of the window and filters nucleotides based on median filtering with a combination of s-shaped and z-shaped filters. Since coding regions cause 3-base periodicity through an unbalanced nucleotide distribution that produces a relatively high bias in nucleotide usage, this fundamental characteristic of nucleotides has been exploited in the FAWMF to suppress signal noise. Along with the adaptive response of the FAWMF, a strong correlation between median nucleotides and the Π-shaped filter was observed, which produced enhanced discrimination between coding and non-coding regions in contrast to fixed length conventional window filters. The proposed FAWMF attains a significant enhancement in coding region identification, i.e. 40% to 125%, compared to other conventional window filters tested over more than 250 benchmarked and randomly taken DNA datasets of different organisms. This study shows that conventional fixed length window filters applied to DNA signals do not achieve significant results since the nucleotides carry genetic code context. The proposed FAWMF algorithm is adaptive and performs significantly better in processing DNA signal contents. The algorithm applied to a variety of DNA datasets produced noteworthy discrimination between coding and non-coding regions in contrast to fixed length conventional window filters. Copyright © 2017 Elsevier B.V. All rights reserved.
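The 3-base periodicity that the FAWMF exploits is conventionally measured with a fixed-length sliding window and the period-3 component of the discrete Fourier transform of per-nucleotide indicator sequences. The sketch below shows that conventional baseline (the approach the paper argues is inadequate), not the proposed FAWMF; the window length and sequences are illustrative.

```python
# Sketch of the conventional fixed-length-window baseline the paper argues
# against: binary indicator sequences per nucleotide and the period-3 spectral
# power in a sliding window, commonly used to flag putative coding regions.
# This is NOT the proposed FAWMF; window size and sequences are illustrative.
import numpy as np

def period3_power(seq, window=351, step=3):
    seq = seq.upper()
    indicators = {b: np.array([1.0 if c == b else 0.0 for c in seq])
                  for b in "ACGT"}
    scores = []
    for start in range(0, len(seq) - window + 1, step):
        k = window // 3                      # DFT bin corresponding to period 3
        n = np.arange(window)
        basis = np.exp(-2j * np.pi * k * n / window)
        power = sum(abs(np.dot(indicators[b][start:start + window], basis)) ** 2
                    for b in "ACGT")
        scores.append(power)
    return np.array(scores)

# Toy usage: a repetitive "coding-like" stretch scores higher than random bases.
rng = np.random.default_rng(1)
coding_like = "ATGGCC" * 100
random_seq = "".join(rng.choice(list("ACGT"), size=600))
print(period3_power(coding_like).mean() > period3_power(random_seq).mean())
```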
Dynamic Divisive Normalization Predicts Time-Varying Value Coding in Decision-Related Circuits
LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W.
2014-01-01
Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding. PMID:25429145
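A minimal sketch of what such a dynamic normalization model can look like is given below; the specific equations, time constants and parameter values are assumptions for illustration and are not necessarily those of the paper. The example reproduces the qualitative prediction that the sustained response to a fixed target value is suppressed as the values of the other options grow.

```python
# Simplified sketch of a dynamic divisive-normalization circuit (an assumed
# form for illustration, not necessarily the paper's exact equations): each
# unit's drive V_i is divisively normalized by a gain signal that integrates
# the summed population output, producing a phasic-then-sustained response.
import numpy as np

def simulate(V, tau_r=0.05, tau_g=0.10, sigma=0.1, omega=1.0,
             dt=0.001, t_end=1.0):
    n = len(V)
    R = np.zeros(n)          # output firing rates
    G = np.zeros(n)          # divisive gain (pooled inhibition)
    trace = []
    for _ in range(int(t_end / dt)):
        dR = (-R + V / (sigma + G)) / tau_r
        dG = (-G + omega * np.sum(R)) / tau_g
        R, G = R + dt * dR, G + dt * dG
        trace.append(R.copy())
    return np.array(trace)

# Context dependence of value coding: the sustained response to a target with
# value 1.0 shrinks as the values of the two other options grow.
for others in (0.0, 1.0, 3.0):
    trace = simulate(np.array([1.0, others, others]))
    print(others, round(float(trace[-1, 0]), 3))
```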
Framework GRASP: routine library for optimize processing of aerosol remote sensing observation
NASA Astrophysics Data System (ADS)
Fuertes, David; Torres, Benjamin; Dubovik, Oleg; Litvinov, Pavel; Lapyonok, Tatyana; Ducos, Fabrice; Aspetsberger, Michael; Federspiel, Christian
We present the development of a framework for the Generalized Retrieval of Aerosol and Surface Properties (GRASP) developed by Dubovik et al. (2011). The framework is a source code project that attempts to strengthen the value of the GRASP inversion algorithm by transforming it into a library that is then used by a group of customized application modules. The functions of the independent modules include managing the configuration of the code execution, as well as preparation of the input and output. The framework provides a number of advantages in the utilization of the code. First, it implements loading data into the core of the scientific code directly from memory, without passing through intermediary files on disk. Second, the framework allows consecutive use of the inversion code without re-initiation of the core routine when new input is received. These features are essential for optimizing the performance of data production when processing large observation sets, such as satellite images, with GRASP. Furthermore, the framework is a very convenient tool for further development, because this open-source platform is easily extended to implement new features. For example, it could accommodate loading of raw data directly into the inversion code from a specific instrument not included in the default settings of the software. Finally, it will be demonstrated that, from the user's point of view, the framework provides a flexible, powerful and informative configuration system.
Developing and modifying behavioral coding schemes in pediatric psychology: a practical guide.
Chorney, Jill MacLaren; McMurtry, C Meghan; Chambers, Christine T; Bakeman, Roger
2015-01-01
To provide a concise and practical guide to the development, modification, and use of behavioral coding schemes for observational data in pediatric psychology. This article provides a review of relevant literature and experience in developing and refining behavioral coding schemes. A step-by-step guide to developing and/or modifying behavioral coding schemes is provided. Major steps include refining a research question, developing or refining the coding manual, piloting and refining the coding manual, and implementing the coding scheme. Major tasks within each step are discussed, and pediatric psychology examples are provided throughout. Behavioral coding can be a complex and time-intensive process, but the approach is invaluable in allowing researchers to address clinically relevant research questions in ways that would not otherwise be possible. © The Author 2014. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Can a senior house officer's time be used more effectively?
Mitchell, J; Hayhurst, C; Robinson, S M
2004-09-01
To determine the amount of time senior house officers (SHO) spent performing tasks that could be delegated to a technician or administrative assistant and therefore to quantify the expected benefit that could be obtained by employing such physicians' assistants (PA). SHOs working in the emergency department were observed for one week by pre-clinical students who had been trained to code and time each task performed by SHOs. Activity was grouped into four categories (clinical, technical, administrative, and other). Those activities in the technical and administrative categories were those we believed could be performed by a PA. The SHOs worked 430 hours in total, of which only 25 hours were not coded due to lack of an observer. Of the 405 hours observed 86.2% of time was accounted for by the various codes. The process of taking a history and examining patients accounted for an average of 22% of coded time. Writing the patient's notes accounted for an average of 20% of coded time. Discussion with relatives and patients accounted for 4.7% of coded time and performing procedures accounted for 5.2% of coded time. On average across all shifts, 15% of coded time was spent doing either technical or administrative tasks. In this department an average of 15% of coded SHOs working time was spent performing administrative and technical tasks, rising to 17% of coded time during a night shift. This is equivalent to an average time of 78 minutes per 10 hour shift/SHO. Most tasks included in these categories could be performed by PAs thus potentially decreasing patient waiting times, improving risk management, allowing doctors to spend more time with their patients, and possibly improving doctors' training.
Peaches for Lunch: Creating and Using Visual Variables.
Cartwright, Elizabeth; Clegg, Adam LaVar
2017-01-01
In this article, I describe the process of systematically including nonverbal data in medical anthropology research. I demonstrate the process of visualizing and coding videotaped moments of life and show how we can analyze what is being done along with what is being said. I ground my discussion in toddler language socialization and then expand my observations to the realm of language pathologies. Aphasia from strokes, speech difficulties in neurologically based illnesses like Lou Gehrig's disease, and the variety of communication challenges that face those on the autism spectrum can all be studied in interesting ways by including precise descriptions of nonverbal actions. I discuss the process of recording and coding the data with the software Observer XT 11.5 by Noldus. This method of collecting and analyzing video data can be used for many anthropological questions, in addition to those concerned with communication.
Flowers, Natalie L
2010-01-01
CodeSlinger is a desktop application that was developed to aid medical professionals in the intertranslation, exploration, and use of biomedical coding schemes. The application was designed to provide a highly intuitive, easy-to-use interface that simplifies a complex business problem: a set of time-consuming, laborious tasks that were regularly performed by a group of medical professionals involving manually searching coding books, searching the Internet, and checking documentation references. A workplace observation session with a target user revealed the details of the current process and a clear understanding of the business goals of the target user group. These goals drove the design of the application's interface, which centers on searches for medical conditions and displays the codes found in the application's database that represent those conditions. The interface also allows the exploration of complex conceptual relationships across multiple coding schemes.
Long non-coding RNAs and mRNAs profiling during spleen development in pig.
Che, Tiandong; Li, Diyan; Jin, Long; Fu, Yuhua; Liu, Yingkai; Liu, Pengliang; Wang, Yixin; Tang, Qianzi; Ma, Jideng; Wang, Xun; Jiang, Anan; Li, Xuewei; Li, Mingzhou
2018-01-01
Genome-wide transcriptomic studies in humans and mice have become extensive and mature. However, a comprehensive and systematic understanding of protein-coding genes and long non-coding RNAs (lncRNAs) expressed during pig spleen development has not been achieved. LncRNAs are known to participate in regulatory networks for an array of biological processes. Here, we constructed 18 RNA libraries from developing fetal pig spleen (55 days before birth), postnatal pig spleens (0, 30, 180 days and 2 years after birth), and the samples from the 2-year-old Wild Boar. A total of 15,040 lncRNA transcripts were identified among these samples. We found that the temporal expression pattern of lncRNAs was more restricted than observed for protein-coding genes. Time-series analysis showed two large modules for protein-coding genes and lncRNAs. The up-regulated module was enriched for genes related to immune and inflammatory function, while the down-regulated module was enriched for cell proliferation processes such as cell division and DNA replication. Co-expression networks indicated the functional relatedness between protein-coding genes and lncRNAs, which were enriched for similar functions over the series of time points examined. We identified numerous differentially expressed protein-coding genes and lncRNAs in all five developmental stages. Notably, ceruloplasmin precursor (CP), a protein-coding gene participating in antioxidant and iron transport processes, was differentially expressed in all stages. This study provides the first catalog of the developing pig spleen, and contributes to a fuller understanding of the molecular mechanisms underpinning mammalian spleen development.
The adaption and use of research codes for performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebetrau, A.M.
1987-05-01
Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.
RAY-RAMSES: a code for ray tracing on the fly in N-body simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barreira, Alexandre; Llinares, Claudio; Bose, Sownak
2016-05-01
We present a ray tracing code to compute integrated cosmological observables on the fly in AMR N-body simulations. Unlike conventional ray tracing techniques, our code takes full advantage of the time and spatial resolution attained by the N-body simulation by computing the integrals along the line of sight on a cell-by-cell basis through the AMR simulation grid. Moreover, since it runs on the fly in the N-body run, our code can produce maps of the desired observables without storing large (or any) amounts of data for post-processing. We implemented our routines in the RAMSES N-body code and tested the implementation using an example weak lensing simulation. We analyse basic statistics of lensing convergence maps and find good agreement with semi-analytical methods. The ray tracing methodology presented here can be used in several cosmological analyses, such as Sunyaev-Zel'dovich and integrated Sachs-Wolfe effect studies as well as modified gravity. Our code can also be used in cross-checks of the more conventional methods, which can be important in tests of theory systematics in preparation for upcoming large scale structure surveys.
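As a rough illustration of the cell-by-cell line-of-sight accumulation described above, the sketch below computes a weak-lensing convergence map on a fixed grid under the Born approximation, with no AMR refinement and no on-the-fly coupling to an N-body run; the cosmological constants, comoving-distance grid, and scale-factor function are assumptions for the example, not the RAY-RAMSES implementation.

```python
# Minimal sketch: accumulate the lensing convergence slab by slab along the
# line of sight, kappa = (3/2) Om (H0/c)^2 * sum_i w(chi_i) delta_i / a_i * dchi_i.
import numpy as np

def convergence_map(delta, a_of_chi, chi_cells, chi_source,
                    omega_m=0.3, h0_over_c=1.0 / 3000.0):
    """delta: (n_cells, ny, nx) overdensity slabs; chi in Mpc/h; c/H0 ~ 3000 Mpc/h."""
    prefac = 1.5 * omega_m * h0_over_c ** 2
    dchi = np.gradient(chi_cells)                       # slab thicknesses
    kappa = np.zeros(delta.shape[1:])
    for delta_slab, chi, dx in zip(delta, chi_cells, dchi):
        weight = chi * (chi_source - chi) / chi_source  # lensing efficiency
        kappa += prefac * weight * delta_slab / a_of_chi(chi) * dx
    return kappa

# toy usage with random slabs and a flat-universe scale factor placeholder
chi = np.linspace(10.0, 2000.0, 50)
delta = np.random.normal(0.0, 0.1, size=(50, 64, 64))
kappa = convergence_map(delta, a_of_chi=lambda c: 1.0 / (1.0 + c / 3000.0),
                        chi_cells=chi, chi_source=2500.0)
```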
Pattern-based integer sample motion search strategies in the context of HEVC
NASA Astrophysics Data System (ADS)
Maier, Georg; Bross, Benjamin; Grois, Dan; Marpe, Detlev; Schwarz, Heiko; Veltkamp, Remco C.; Wiegand, Thomas
2015-09-01
The H.265/MPEG-H High Efficiency Video Coding (HEVC) standard provides a significant increase in coding efficiency compared to its predecessor, the H.264/MPEG-4 Advanced Video Coding (AVC) standard, which, however, comes at the cost of a high computational burden for a compliant encoder. Motion estimation (ME), which is a part of the inter-picture prediction process, typically consumes a high amount of computational resources, while significantly increasing the coding efficiency. In spite of the fact that both the H.265/MPEG-H HEVC and H.264/MPEG-4 AVC standards allow processing motion information on a fractional sample level, motion search algorithms based on the integer sample level remain an integral part of ME. In this paper, a flexible integer sample ME framework is proposed, which allows a significant reduction in ME computation time to be traded off against a coding-efficiency penalty in terms of bit-rate overhead. As a result, through extensive experimentation, an integer sample ME algorithm that provides a good trade-off is derived, incorporating a combination and optimization of known predictive, pattern-based and early termination techniques. The proposed ME framework is implemented on the basis of the HEVC Test Model (HM) reference software and compared to the state-of-the-art fast search algorithm, which is a native part of HM. It is observed that for high resolution sequences, the integer sample ME process can be sped up by factors varying from 3.2 to 7.6, resulting in bit-rate overheads of 1.5% and 0.6% for Random Access (RA) and Low Delay P (LDP) configurations, respectively. In addition, a similar speed-up is observed for sequences with mainly Computer-Generated Imagery (CGI) content, at a bit-rate overhead of up to 5.2%.
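To make the pattern-based search idea concrete, the following is a minimal illustrative sketch, not the HM reference implementation: a small diamond-pattern integer-sample search over the sum of absolute differences (SAD) with a simple early-termination threshold. The block size, search range, and threshold are hypothetical parameters.

```python
# Illustrative diamond-pattern integer-sample motion search with early termination,
# assuming 8-bit grayscale numpy frames (cur, ref) of equal size.
import numpy as np

def sad(cur, ref, bx, by, mx, my, bs):
    """SAD between the current block and a motion-compensated candidate."""
    y0, y1, x0, x1 = by + my, by + my + bs, bx + mx, bx + mx + bs
    if y0 < 0 or x0 < 0 or y1 > ref.shape[0] or x1 > ref.shape[1]:
        return float("inf")                       # candidate outside the reference frame
    c = cur[by:by + bs, bx:bx + bs].astype(np.int32)
    r = ref[y0:y1, x0:x1].astype(np.int32)
    return int(np.abs(c - r).sum())

def diamond_search(cur, ref, bx, by, bs=16, search_range=32, early_stop=64):
    """Return (mvx, mvy, cost) for one block using large/small diamond patterns."""
    large = [(0, 0), (2, 0), (-2, 0), (0, 2), (0, -2),
             (1, 1), (1, -1), (-1, 1), (-1, -1)]
    small = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    mvx = mvy = 0
    best = sad(cur, ref, bx, by, mvx, mvy, bs)
    while best > early_stop:                      # early termination on a good match
        step_best, step_mv = best, (0, 0)
        for dx, dy in large:                      # large diamond around current best
            nx, ny = mvx + dx, mvy + dy
            if abs(nx) > search_range or abs(ny) > search_range:
                continue
            cost = sad(cur, ref, bx, by, nx, ny, bs)
            if cost < step_best:
                step_best, step_mv = cost, (dx, dy)
        if step_mv == (0, 0):                     # centre is best: refine with small diamond
            for dx, dy in small:
                cost = sad(cur, ref, bx, by, mvx + dx, mvy + dy, bs)
                if cost < best:
                    best, mvx, mvy = cost, mvx + dx, mvy + dy
            break
        best, mvx, mvy = step_best, mvx + step_mv[0], mvy + step_mv[1]
    return mvx, mvy, best
```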
ERIC Educational Resources Information Center
Monaghan, Padraic; Rowland, Caroline F.
2017-01-01
Historically, first language acquisition research was a painstaking process of observation, requiring the laborious hand coding of children's linguistic productions, followed by the generation of abstract theoretical proposals for how the developmental process unfolds. Recently, the ability to collect large-scale corpora of children's language…
Dynamic divisive normalization predicts time-varying value coding in decision-related circuits.
Louie, Kenway; LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W
2014-11-26
Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding. Copyright © 2014 the authors 0270-6474/14/3416046-12$15.00/0.
Pesch, Megan H; Lumeng, Julie C
2017-12-15
Behavioral coding of videotaped eating and feeding interactions can provide researchers with rich observational data and unique insights into eating behaviors, food intake, and food selection, as well as interpersonal and mealtime dynamics of children and their families. Unlike self-report measures of eating and feeding practices, the coding of videotaped eating and feeding behaviors can allow for the quantitative and qualitative examination of behaviors and practices that participants may not self-report. While this methodology is increasingly common, behavioral coding protocols and methodology are not widely shared in the literature. This has important implications for the validity and reliability of coding schemes across settings. Additional guidance on how to design, implement, code and analyze videotaped eating and feeding behaviors could contribute to advancing the science of behavioral nutrition. The objectives of this narrative review are to review methodology for the design, operationalization, and coding of videotaped behavioral eating and feeding data in children and their families, and to highlight best practices. When capturing eating and feeding behaviors through analysis of videotapes, it is important for the study and coding to be hypothesis driven. Study design considerations include how to best capture the target behaviors through selection of a controlled experimental laboratory environment versus home mealtime, duration of video recording, number of observations to achieve reliability across eating episodes, as well as technical issues in video recording and sound quality. Study design must also take into account plans for coding the target behaviors, which may include behavior frequency, duration, categorization or qualitative descriptors. Coding scheme creation and refinement occur through an iterative process. Reliability between coders can be challenging to achieve but is paramount to the scientific rigor of the methodology. The analysis approach depends on how the data were coded and collapsed. Behavioral coding of videotaped eating and feeding behaviors can capture rich data "in vivo" that are otherwise unobtainable from self-report measures. While data collection and coding are time-intensive, the data yielded can be extremely valuable. Additional sharing of methodology and coding schemes around eating and feeding behaviors could advance the science and field.
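The review stresses inter-coder reliability without prescribing a statistic; as one hedged illustration, Cohen's kappa for two coders assigning categorical codes to the same videotaped events could be computed as in the sketch below (the event codes are invented for the example and are not from the review).

```python
# Cohen's kappa: chance-corrected agreement between two coders on nominal codes.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n      # raw agreement
    pa, pb = Counter(coder_a), Counter(coder_b)
    expected = sum(pa[c] * pb[c] for c in set(pa) | set(pb)) / n ** 2  # chance agreement
    return (observed - expected) / (1 - expected)

# hypothetical codes assigned to four mealtime events by two coders
print(cohens_kappa(["bite", "offer", "refuse", "bite"],
                   ["bite", "offer", "bite", "bite"]))
```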
Report from the Integrated Modeling Panel at the Workshop on the Science of Ignition on NIF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marinak, M; Lamb, D
2012-07-03
This section deals with multiphysics radiation hydrodynamics codes used to design and simulate targets in the ignition campaign. These topics encompass all the physical processes they model, and include consideration of any approximations necessary due to finite computer resources. The section focuses on what developments would have the highest impact on reducing uncertainties in modeling most relevant to experimental observations. It considers how the ICF codes should be employed in the ignition campaign. This includes a consideration of how the experiments can be best structured to test the physical models the codes employ.
Molecular Dynamic Studies of Particle Wake Potentials in Plasmas
NASA Astrophysics Data System (ADS)
Ellis, Ian; Graziani, Frank; Glosli, James; Strozzi, David; Surh, Michael; Richards, David; Decyk, Viktor; Mori, Warren
2010-11-01
Fast Ignition studies require a detailed understanding of electron scattering, stopping, and energy deposition in plasmas with variable values for the number of particles within a Debye sphere. Presently there is disagreement in the literature concerning the proper description of these processes. Developing and validating proper descriptions requires studying the processes using first-principles electrostatic simulations, possibly including magnetic fields. We are using the particle-particle particle-mesh (P^3M) code ddcMD to perform these simulations. As a starting point in our study, we examined the wake of a particle passing through a plasma. In this poster, we compare the wake observed in 3D ddcMD simulations with that predicted by Vlasov theory and those observed in the electrostatic PIC code BEPS, where the cell size was reduced to 0.03 λD.
Subjective evaluation of next-generation video compression algorithms: a case study
NASA Astrophysics Data System (ADS)
De Simone, Francesca; Goldmann, Lutz; Lee, Jong-Seok; Ebrahimi, Touradj; Baroncini, Vittorio
2010-08-01
This paper describes the details and the results of the subjective quality evaluation performed at EPFL, as a contribution to the effort of the Joint Collaborative Team on Video Coding (JCT-VC) for the definition of the next-generation video coding standard. The performance of 27 coding technologies have been evaluated with respect to two H.264/MPEG-4 AVC anchors, considering high definition (HD) test material. The test campaign involved a total of 494 naive observers and took place over a period of four weeks. While similar tests have been conducted as part of the standardization process of previous video coding technologies, the test campaign described in this paper is by far the most extensive in the history of video coding standardization. The obtained subjective quality scores show high consistency and support an accurate comparison of the performance of the different coding solutions.
Perceptual consequences of disrupted auditory nerve activity.
Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold
2005-06-01
Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysiological evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing-related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique contribution of neural synchrony to sensory perception but also provide guidance for translational research in terms of better diagnosis and management of human communication disorders.
Farzandipour, Mehrdad; Sheikhtaheri, Abbas
2009-01-01
To evaluate the accuracy of procedural coding and the factors that influence it, 246 records were randomly selected from four teaching hospitals in Kashan, Iran. “Recodes” were assigned blindly and then compared to the original codes. Furthermore, the coders' professional behaviors were carefully observed during the coding process. Coding errors were classified as major or minor. The relations between coding accuracy and possible effective factors were analyzed by χ2 or Fisher exact tests as well as the odds ratio (OR) and the 95 percent confidence interval for the OR. The results showed that using a tabular index for rechecking codes reduces errors (83 percent vs. 72 percent accuracy). Further, more thorough documentation by the clinician positively affected coding accuracy, though this relation was not significant. Readability of records decreased errors overall (p = .003), including major ones (p = .012). Moreover, records with no abbreviations had fewer major errors (p = .021). In conclusion, not using abbreviations, ensuring more readable documentation, and paying more attention to available information increased coding accuracy and the quality of procedure databases. PMID:19471647
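As a hedged illustration of the statistics named in this abstract, the sketch below computes an odds ratio and its Wald-type 95 percent confidence interval from a 2x2 table; the counts are hypothetical and are not the study's data.

```python
# Odds ratio and Wald 95% CI from a 2x2 table relating a record factor
# (e.g. readable documentation) to accurate vs. erroneous coding.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b,c,d: factor-present/accurate, factor-present/error, absent/accurate, absent/error."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# hypothetical counts: readable vs. non-readable records against major errors
print(odds_ratio_ci(a=120, b=15, c=80, d=31))
```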
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Branch, Oliver; Attinger, Sabine; Thober, Stephan
2016-09-01
Land surface models incorporate a large number of process descriptions, containing a multitude of parameters. These parameters are typically read from tabulated input files. Some of these parameters, though, might be fixed numbers in the computer code, which hinders model agility during calibration. Here we identified 139 hard-coded parameters in the model code of the Noah land surface model with multiple process options (Noah-MP). We performed a Sobol' global sensitivity analysis of Noah-MP for a specific set of process options, which includes 42 out of the 71 standard parameters and 75 out of the 139 hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff, as well as their component fluxes, were evaluated at 12 catchments within the United States with very different hydrometeorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its applicable standard parameters (i.e., Sobol' indices above 1%). The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for direct evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities because of their tight coupling via the water balance. A calibration of Noah-MP against either of these fluxes should therefore give comparable results. Moreover, these fluxes are sensitive to both plant and soil parameters. Calibrating, for example, only the soil parameters hence limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters exposed in this study when calibrating Noah-MP.
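For readers unfamiliar with Sobol' indices, the following is a minimal sketch of a Saltelli-style estimator of first-order indices, of the kind used to rank parameters by sensitivity. The toy model, sample size, and uniform unit parameter ranges are assumptions for illustration and do not reproduce the Noah-MP setup.

```python
# First-order Sobol' indices via the Saltelli et al. (2010) estimator:
# S_i = E[ f(B) * (f(AB_i) - f(A)) ] / Var(Y), with AB_i = A with column i from B.
import numpy as np

def sobol_first_order(model, n_params, n_samples=4096, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_params))          # two independent sample matrices
    B = rng.random((n_samples, n_params))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))         # total output variance
    s1 = np.empty(n_params)
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                        # swap column i only
        s1[i] = np.mean(fB * (model(ABi) - fA)) / var
    return s1

# toy model: y = x0 + 0.5*x1 (x2 has no influence), expected indices ~ 0.8, 0.2, 0.0
toy = lambda X: X[:, 0] + 0.5 * X[:, 1]
print(sobol_first_order(toy, n_params=3))
```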
Cyclotron resonant scattering feature simulations. II. Description of the CRSF simulation process
NASA Astrophysics Data System (ADS)
Schwarm, F.-W.; Ballhausen, R.; Falkner, S.; Schönherr, G.; Pottschmidt, K.; Wolff, M. T.; Becker, P. A.; Fürst, F.; Marcu-Cheatham, D. M.; Hemphill, P. B.; Sokolova-Lapa, E.; Dauser, T.; Klochkov, D.; Ferrigno, C.; Wilms, J.
2017-05-01
Context. Cyclotron resonant scattering features (CRSFs) are formed by scattering of X-ray photons off quantized plasma electrons in the strong magnetic field (of the order of 10^12 G) close to the surface of an accreting X-ray pulsar. Due to the complex scattering cross-sections, the line profiles of CRSFs cannot be described by an analytic expression. Numerical methods, such as Monte Carlo (MC) simulations of the scattering processes, are required in order to predict precise line shapes for a given physical setup, which can be compared to observations to gain information about the underlying physics in these systems. Aims: A versatile simulation code is needed for the generation of synthetic cyclotron lines, making the investigation of sophisticated geometries possible for the first time. Methods: The simulation utilizes the mean free path tables described in the first paper of this series for the fast interpolation of propagation lengths. The code is parallelized to make the very time-consuming simulations possible on convenient time scales. Furthermore, it can generate responses to monoenergetic photon injections, producing Green's functions, which can be used later to generate spectra for arbitrary continua. Results: We develop a new simulation code to generate synthetic cyclotron lines for complex scenarios, allowing for unprecedented physical interpretation of the observed data. An associated XSPEC model implementation is used to fit synthetic line profiles to NuSTAR data of Cep X-4. The code has been developed with the main goal of overcoming previous geometrical constraints in MC simulations of CRSFs. By applying this code also to simpler, classic geometries used in previous works, we furthermore address issues of code verification and cross-comparison of various models. The XSPEC model and the Green's function tables are available online.
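A minimal sketch of the Green's-function step described in the Methods: once a table of responses to monoenergetic injections is available, a synthetic spectrum for an arbitrary continuum follows from a discrete convolution. The identity matrix standing in for the Monte Carlo table and the power-law continuum below are placeholders, not the published tables.

```python
# Emergent spectrum from tabulated Green's functions:
# F_out(E_j) = sum_i G[i, j] * F_cont(E_i) * dE, on a fixed energy grid.
import numpy as np

n_energies = 200
energies = np.linspace(10.0, 60.0, n_energies)     # keV grid (illustrative)
green = np.eye(n_energies)                          # placeholder for the MC response table
continuum = energies ** -1.5                        # toy power-law photon flux

dE = energies[1] - energies[0]
emergent = continuum @ green * dE                   # discrete convolution with the table
```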
Safety of vendor-prepared foods: evaluation of 10 processing mobile food vendors in Manhattan.
Burt, Bryan M; Volel, Caroline; Finkel, Madelon
2003-01-01
Unsanitary food handling is a major public health hazard. There are over 4,100 mobile food vendors operating in New York City, and of these, approximately forty percent are processing vendors--mobile food units on which potentially hazardous food products are handled, prepared, or processed. This pilot study assesses the food handling practices of 10 processing mobile food vendors operating in a 38-block area of midtown Manhattan (New York City) from 43rd Street to 62nd Street between Madison and Sixth Avenues, and compares them to regulations stipulated in the New York City Health Code. Ten processing mobile food vendors located in midtown Manhattan were observed for a period of 20 minutes each. Unsanitary food handling practices, food storage at potentially unsafe temperatures, and food contamination with uncooked meat or poultry were recorded. Over half of all vendors (67%) were found to contact served foods with bare hands. Four vendors were observed vending with visibly dirty hands or gloves and no vendor once washed his or her hands or changed gloves in the 20-minute observation period. Seven vendors had previously cooked meat products stored at unsafe temperatures on non-heating or non-cooking portions of the vendor cart for the duration of the observation. Four vendors were observed to contaminate served foods with uncooked meat or poultry. Each of these actions violates the New York City Code of Health and potentially jeopardizes the safety of these vendor-prepared foods. More stringent adherence to food safety regulations should be promoted by the New York City Department of Health.
Maximising information recovery from rank-order codes
NASA Astrophysics Data System (ADS)
Sen, B.; Furber, S.
2007-04-01
The central nervous system encodes information in sequences of asynchronously generated voltage spikes, but the precise details of this encoding are not well understood. Thorpe proposed rank-order codes as an explanation of the observed speed of information processing in the human visual system. The work described in this paper is inspired by the performance of SpikeNET, a biologically inspired neural architecture using rank-order codes for information processing, and is based on the retinal model developed by VanRullen and Thorpe. This model mimics retinal information processing by passing an input image through a bank of Difference of Gaussian (DoG) filters and then encoding the resulting coefficients in rank-order. To test the effectiveness of this encoding in capturing the information content of an image, the rank-order representation is decoded to reconstruct an image that can be compared with the original. The reconstruction uses a look-up table to infer the filter coefficients from their rank in the encoded image. Since the DoG filters are approximately orthogonal functions, they are treated as their own inverses in the reconstruction process. We obtained a quantitative measure of the perceptually important information retained in the reconstructed image relative to the original using a slightly modified version of an objective metric proposed by Petrovic. It is observed that around 75% of the perceptually important information is retained in the reconstruction. In the present work we reconstruct the input using a pseudo-inverse of the DoG filter-bank with the aim of improving the reconstruction and thereby extracting more information from the rank-order encoded stimulus. We observe that there is an increase of 10 - 15% in the information retrieved from a reconstructed stimulus as a result of inverting the filter-bank.
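To make the encode/decode idea concrete, here is a highly simplified sketch assuming a single-scale DoG filter and SciPy; the model summarized above uses a multi-scale filter bank and a look-up table built over a training set, so the single scale and the per-image table below are only illustrative stand-ins.

```python
# Rank-order coding sketch: keep only the ORDER (and sign) of DoG filter responses,
# then reconstruct each coefficient's magnitude from its rank via a look-up table.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(img, s1=1.0, s2=2.0):
    """Single-scale Difference-of-Gaussians response (stand-in for a filter bank)."""
    return gaussian_filter(img, s1) - gaussian_filter(img, s2)

def rank_order_encode(img):
    coeffs = dog(img)
    order = np.argsort(-np.abs(coeffs), axis=None)   # pixel positions sorted by |response|
    signs = np.sign(coeffs).ravel()[order]
    return order, signs, coeffs

def rank_order_decode(order, signs, lut, shape):
    rec = np.zeros(np.prod(shape))
    rec[order] = signs * lut                         # magnitude inferred from rank only
    return rec.reshape(shape)

img = np.random.rand(64, 64)
order, signs, coeffs = rank_order_encode(img)
# look-up table: expected |coefficient| at each rank; built here from this one image,
# whereas the original model builds it over a training set.
lut = np.sort(np.abs(coeffs), axis=None)[::-1]
coeff_map = rank_order_decode(order, signs, lut, img.shape)
```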
Nakamura, Brad J; Selbo-Bruns, Alexandra; Okamura, Kelsie; Chang, Jaime; Slavin, Lesley; Shimabukuro, Scott
2014-02-01
The purpose of this small pilot study was three-fold: (a) to begin development of a coding scheme for supervisor and therapist skill acquisition, (b) to preliminarily investigate a pilot train-the-trainer paradigm for skill development, and (c) to evaluate self-reported versus observed indicators of skill mastery in that pilot program. Participants included four supervisor-therapist dyads (N = 8) working with public mental health sector youth. Master trainers taught cognitive-behavioral therapy techniques to supervisors, who in turn trained therapists on these techniques. Supervisor and therapist skill acquisition and supervisor use of teaching strategies were repeatedly assessed through coding of scripted role-plays with a multiple-baseline across participants and behaviors design. The coding system, the Practice Element Train the Trainer - Supervisor/Therapist Versions of the Therapy Process Observational Coding System for Child Psychotherapy, was developed and evaluated though the course of the investigation. The coding scheme demonstrated excellent reliability (ICCs [1,2] = 0.81-0.91) across 168 video recordings. As calculated through within-subject effect sizes, supervisor and therapist participants, respectively, evidenced skill improvements related to teaching and performing therapy techniques. Self-reported indicators of skill mastery were inflated in comparison to observed skill mastery. Findings lend initial support for further developing an evaluative approach for a train-the-trainer effort focused on disseminating evidence-based practices. Published by Elsevier Ltd.
Multi-Region Boundary Element Analysis for Coupled Thermal-Fracturing Processes in Geomaterials
NASA Astrophysics Data System (ADS)
Shen, Baotang; Kim, Hyung-Mok; Park, Eui-Seob; Kim, Taek-Kon; Wuttke, Manfred W.; Rinne, Mikael; Backers, Tobias; Stephansson, Ove
2013-01-01
This paper describes the development of a boundary element code for coupled thermal-mechanical processes of rock fracture propagation. The code development was based on the fracture mechanics code FRACOD, previously developed by Shen and Stephansson (Int J Eng Fracture Mech 47:177-189, 1993) and FRACOM (A fracture propagation code—FRACOD, User's manual. FRACOM Ltd. 2002), which simulates complex fracture propagation in rocks governed by both tensile and shear mechanisms. For the coupled thermal-fracturing analysis, an indirect boundary element method, namely the fictitious heat source method, was implemented in FRACOD to simulate the temperature change and thermal stresses in rocks. This indirect method is particularly suitable for the thermal-fracturing coupling in FRACOD, where the displacement discontinuity method is used for the mechanical simulation. The coupled code was also extended to simulate multiple-region problems in which rock mass, concrete linings and insulation layers with different thermal and mechanical properties are present. Both verification and application cases are presented, in which a point heat source in a 2D infinite medium and a pilot LNG underground cavern were solved and studied using the coupled code. Good agreement was observed between the simulation results, analytical solutions and in situ measurements, which validates the applicability of the developed coupled code.
Rhodes, Gillian; Burton, Nichola; Jeffery, Linda; Read, Ainsley; Taylor, Libby; Ewing, Louise
2018-05-01
Individuals with autism spectrum disorder (ASD) can have difficulty recognizing emotional expressions. Here, we asked whether the underlying perceptual coding of expression is disrupted. Typical individuals code expression relative to a perceptual (average) norm that is continuously updated by experience. This adaptability of face-coding mechanisms has been linked to performance on various face tasks. We used an adaptation aftereffect paradigm to characterize expression coding in children and adolescents with autism. We asked whether face expression coding is less adaptable in autism and whether there is any fundamental disruption of norm-based coding. If expression coding is norm-based, then the face aftereffects should increase with adaptor expression strength (distance from the average expression). We observed this pattern in both autistic and typically developing participants, suggesting that norm-based coding is fundamentally intact in autism. Critically, however, expression aftereffects were reduced in the autism group, indicating that expression-coding mechanisms are less readily tuned by experience. Reduced adaptability has also been reported for coding of face identity and gaze direction. Thus, there appears to be a pervasive lack of adaptability in face-coding mechanisms in autism, which could contribute to face processing and broader social difficulties in the disorder. © 2017 The British Psychological Society.
Hunt, R.J.; Feinstein, D.T.; Pint, C.D.; Anderson, M.P.
2006-01-01
As part of the USGS Water, Energy, and Biogeochemical Budgets project and the NSF Long-Term Ecological Research work, a parameter estimation code was used to calibrate a deterministic groundwater flow model of the Trout Lake Basin in northern Wisconsin. Observations included traditional calibration targets (head, lake stage, and baseflow observations) as well as unconventional targets such as groundwater flows to and from lakes, depth of a lake water plume, and time of travel. The unconventional data types were important for parameter estimation convergence and allowed the development of a more detailed parameterization capable of resolving model objectives with well-constrained parameter values. Independent estimates of groundwater inflow to lakes were most important for constraining lakebed leakance and the depth of the lake water plume was important for determining hydraulic conductivity and conceptual aquifer layering. The most important target overall, however, was a conventional regional baseflow target that led to correct distribution of flow between sub-basins and the regional system during model calibration. The use of an automated parameter estimation code: (1) facilitated the calibration process by providing a quantitative assessment of the model's ability to match disparate observed data types; and (2) allowed assessment of the influence of observed targets on the calibration process. The model calibration required the use of a 'universal' parameter estimation code in order to include all types of observations in the objective function. The methods described in this paper help address issues of watershed complexity and non-uniqueness common to deterministic watershed models. © 2005 Elsevier B.V. All rights reserved.
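As a rough illustration of what a "universal" parameter-estimation code minimizes when disparate observation types enter one calibration, the sketch below forms a weighted least-squares objective; the observation groups, weights, and simulation functions are hypothetical, not the Trout Lake model setup.

```python
# Weighted sum-of-squares objective combining several observation groups
# (e.g. heads, lake stages, baseflows, plume depth, travel times).
import numpy as np

def objective(params, groups):
    """groups: list of (simulate_fn, observed_values, weight) tuples."""
    phi = 0.0
    for simulate, observed, weight in groups:
        residuals = simulate(params) - np.asarray(observed)
        phi += np.sum((weight * residuals) ** 2)   # weights scale each data type to comparable units
    return phi

# hypothetical usage with two stand-in observation groups
heads = (lambda p: np.array([10.2 * p[0], 9.8 * p[0]]), [10.0, 9.9], 1.0)
baseflow = (lambda p: np.array([0.45 * p[1]]), [0.5], 20.0)
print(objective(np.array([1.0, 1.1]), [heads, baseflow]))
```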
Quality improvement utilizing in-situ simulation for a dual-hospital pediatric code response team.
Yager, Phoebe; Collins, Corey; Blais, Carlene; O'Connor, Kathy; Donovan, Patricia; Martinez, Maureen; Cummings, Brian; Hartnick, Christopher; Noviski, Natan
2016-09-01
Given the rarity of in-hospital pediatric emergency events, identification of gaps and inefficiencies in the code response can be difficult. In-situ, simulation-based medical education programs can identify unrecognized systems-based challenges. We hypothesized that developing an in-situ, simulation-based pediatric emergency response program would identify latent inefficiencies in a complex, dual-hospital pediatric code response system and allow rapid intervention testing to improve performance before implementation at an institutional level. Pediatric leadership from two hospitals with a shared pediatric code response team employed the Institute for Healthcare Improvement's (IHI) Breakthrough Model for Collaborative Improvement to design a program consisting of Plan-Do-Study-Act cycles occurring in a simulated environment. The objectives of the program were to 1) identify inefficiencies in our pediatric code response; 2) correlate to current workflow; 3) employ an iterative process to test quality improvement interventions in a safe environment; and 4) measure performance before actual implementation at the institutional level. Twelve dual-hospital, in-situ, simulated, pediatric emergencies occurred over one year. The initial simulated event allowed identification of inefficiencies including delayed provider response, delayed initiation of cardiopulmonary resuscitation (CPR), and delayed vascular access. These gaps were linked to process issues including unreliable code pager activation, slow elevator response, and lack of responder familiarity with layout and contents of code cart. From first to last simulation with multiple simulated process improvements, code response time for secondary providers coming from the second hospital decreased from 29 to 7 min, time to CPR initiation decreased from 90 to 15 s, and vascular access obtainment decreased from 15 to 3 min. Some of these simulated process improvements were adopted into the institutional response while others continue to be trended over time for evidence that observed changes represent a true new state of control. Utilizing the IHI's Breakthrough Model, we developed a simulation-based program to 1) successfully identify gaps and inefficiencies in a complex, dual-hospital, pediatric code response system and 2) provide an environment in which to safely test quality improvement interventions before institutional dissemination. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B
2015-01-01
Objectives To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Materials and methods Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Results Transformation to the CDM resulted in minimal information loss across all 6 databases. Patients and observations excluded were due to identified data quality issues in the source system, 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. Discussion The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of inclusion criteria, using the protocol and identified differences in patient characteristics and coding practices across databases. Conclusion Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. PMID:25670757
Kurt, Simone; Sausbier, Matthias; Rüttiger, Lukas; Brandt, Niels; Moeller, Christoph K.; Kindler, Jennifer; Sausbier, Ulrike; Zimmermann, Ulrike; van Straaten, Harald; Neuhuber, Winfried; Engel, Jutta; Knipper, Marlies; Ruth, Peter; Schulze, Holger
2012-01-01
Large conductance, voltage- and Ca2+-activated K+ (BK) channels in inner hair cells (IHCs) of the cochlea are essential for hearing. However, germline deletion of BKα, the pore-forming subunit KCNMA1 of the BK channel, surprisingly did not affect hearing thresholds in the first postnatal weeks, even though altered IHC membrane time constants, decreased IHC receptor potential alternating current/direct current ratio, and impaired spike timing of auditory fibers were reported in these mice. To investigate the role of IHC BK channels for central auditory processing, we generated a conditional mouse model with hair cell-specific deletion of BKα from postnatal day 10 onward. This had an unexpected effect on temporal coding in the central auditory system: neuronal single and multiunit responses in the inferior colliculus showed higher excitability and greater precision of temporal coding that may be linked to the improved discrimination of temporally modulated sounds observed in behavioral training. The higher precision of temporal coding, however, was restricted to slower modulations of sound and reduced stimulus-driven activity. This suggests a diminished dynamic range of stimulus coding that is expected to impair signal detection in noise. Thus, BK channels in IHCs are crucial for central coding of the temporal fine structure of sound and for detection of signals in a noisy environment.—Kurt, S., Sausbier, M., Rüttiger, L., Brandt, N., Moeller, C. K., Kindler, J., Sausbier, U., Zimmermann, U., van Straaten, H., Neuhuber, W., Engel, J., Knipper, M., Ruth, P., Schulze, H. Critical role for cochlear hair cell BK channels for coding the temporal structure and dynamic range of auditory information for central auditory processing. PMID:22691916
Theory of Mind: A Neural Prediction Problem
Koster-Hale, Jorie; Saxe, Rebecca
2014-01-01
Predictive coding posits that neural systems make forward-looking predictions about incoming information. Neural signals contain information not about the currently perceived stimulus, but about the difference between the observed and the predicted stimulus. We propose to extend the predictive coding framework from high-level sensory processing to the more abstract domain of theory of mind; that is, to inferences about others’ goals, thoughts, and personalities. We review evidence that, across brain regions, neural responses to depictions of human behavior, from biological motion to trait descriptions, exhibit a key signature of predictive coding: reduced activity to predictable stimuli. We discuss how future experiments could distinguish predictive coding from alternative explanations of this response profile. This framework may provide an important new window on the neural computations underlying theory of mind. PMID:24012000
Formability analysis of aluminum alloys through deep drawing process
NASA Astrophysics Data System (ADS)
Pranavi, U.; Janaki Ramulu, Perumalla; Chandramouli, Ch; Govardhan, Dasari; Prasad, PVS. Ram
2016-09-01
The deep drawing process is a significant metal forming process used in sheet metal forming operations. With this process, complex shapes can be manufactured with fewer defects. Deep drawing has several influential process parameters, from which an optimum level should be identified so that an efficient final product with the required mechanical properties is obtained. The present work evaluates the formability of aluminum alloy sheets in the deep drawing process, in which the effects of punch radius, lubricating conditions, die radius, and blank holding force are observed for AA 6061 aluminum alloy sheet of 2 mm thickness. Numerical simulations are performed for deep drawing of square cups using three levels of the lubricating conditions and blank holding forces and two levels of the punch radii and die radii. For the numerical simulation, a commercial FEM code is used in which Hollomon's power law and Hill's 1948 yield criterion are implemented. The deep drawing setup used in the FEM code is modeled with a CAD tool, considering the modeling requirements from the literature. Two different strain paths (150x150 mm and 200x200 mm) are simulated. Punch forces, thickness distributions and dome heights are evaluated for all conditions. In addition, failure initiation and propagation are observed. The results show that increasing the coefficient of friction and the blank holding force changes the punch force, thickness distribution and dome height. The comparison was made and the optimum parameters were suggested from the results. From this work one can predict the formability for different strain paths without experimentation.
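For reference, the two material relations named above can be written down directly; the sketch below evaluates Hollomon hardening and the planar-isotropic Hill (1948) plane-stress equivalent stress with illustrative AA6061-like constants. The values of K, n, and R are assumptions for the example, not the constants used in the cited simulations.

```python
# Hollomon hardening: sigma = K * eps_p**n
# Hill 1948 (planar-isotropic, plane stress, principal stresses s1, s2):
# sigma_eq = sqrt(s1^2 + s2^2 - (2R/(1+R)) * s1 * s2)
import math

def hollomon_flow_stress(eps_p, K=400.0, n=0.2):        # MPa, illustrative constants
    return K * eps_p ** n

def hill48_equivalent_stress(s1, s2, R=0.7):            # R: normal anisotropy ratio
    return math.sqrt(s1 ** 2 + s2 ** 2 - (2.0 * R / (1.0 + R)) * s1 * s2)

# flow stress at 5% plastic strain and equivalent stress for a biaxial state (MPa)
print(hollomon_flow_stress(0.05), hill48_equivalent_stress(200.0, 120.0))
```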
Sputtering of rough surfaces: a 3D simulation study
NASA Astrophysics Data System (ADS)
von Toussaint, U.; Mutzke, A.; Manhard, A.
2017-12-01
The lifetime of plasma-facing components is critical for future magnetic confinement fusion power plants. A key process limiting the lifetime of the first wall is sputtering by energetic ions. To provide consistent modeling of the sputtering process for realistic geometries, the SDTrimSP code has been extended to enable the processing of analytic as well as measured arbitrary 3D surface morphologies. The code has been applied to study the effect of the ion impact angle on the sputter yield of rough surfaces, as well as the influence of the aspect ratio of surface structures on the 2D distribution of the local sputtering yields. Depending on the surface morphology, reductions of the effective sputter yield to less than 25% have been observed in the simulation results.
Weisberg, Jill; McCullough, Stephen; Emmorey, Karen
2018-01-01
Code-blends (simultaneous words and signs) are a unique characteristic of bimodal bilingual communication. Using fMRI, we investigated code-blend comprehension in hearing native ASL-English bilinguals who made a semantic decision (edible?) about signs, audiovisual words, and semantically equivalent code-blends. English and ASL recruited a similar fronto-temporal network with expected modality differences: stronger activation for English in auditory regions of bilateral superior temporal cortex, and stronger activation for ASL in bilateral occipitotemporal visual regions and left parietal cortex. Code-blend comprehension elicited activity in a combination of these regions, and no cognitive control regions were additionally recruited. Furthermore, code-blends elicited reduced activation relative to ASL presented alone in bilateral prefrontal and visual extrastriate cortices, and relative to English alone in auditory association cortex. Consistent with behavioral facilitation observed during semantic decisions, the findings suggest that redundant semantic content induces more efficient neural processing in language and sensory regions during bimodal language integration. PMID:26177161
Quantitative data analysis to determine best food cooling practices in U.S. restaurants.
Schaffner, Donald W; Brown, Laura Green; Ripley, Danny; Reimann, Dave; Koktavy, Nicole; Blade, Henry; Nicholas, David
2015-04-01
Data collected by the Centers for Disease Control and Prevention (CDC) show that improper cooling practices contributed to more than 500 foodborne illness outbreaks associated with restaurants or delis in the United States between 1998 and 2008. CDC's Environmental Health Specialists Network (EHS-Net) personnel collected data in approximately 50 randomly selected restaurants in nine EHS-Net sites in 2009 to 2010 and measured the temperatures of cooling food at the beginning and the end of the observation period. Those beginning and ending points were used to estimate cooling rates. The most common cooling method was refrigeration, used in 48% of cooling steps. Other cooling methods included ice baths (19%), room-temperature cooling (17%), ice-wand cooling (7%), and adding ice or frozen food to the cooling food as an ingredient (2%). Sixty-five percent of cooling observations had an estimated cooling rate that was compliant with the 2009 Food and Drug Administration Food Code guideline (cooling to 41 °F [5 °C] in 6 h). Large cuts of meat and stews had the slowest overall estimated cooling rate, approximately equal to that specified in the Food Code guideline. Pasta and noodles were the fastest cooling foods, with a cooling time of just over 2 h. Foods not being actively monitored by food workers were more than twice as likely to cool more slowly than recommended in the Food Code guideline. Food stored at a depth greater than 7.6 cm (3 in.) was twice as likely to cool more slowly than specified in the Food Code guideline. Unventilated cooling foods were almost twice as likely to cool more slowly than specified in the Food Code guideline. Our data suggest that several best cooling practices can contribute to a proper cooling process. Inspectors unable to assess the full cooling process should consider assessing specific cooling practices as an alternative. Future research could validate our estimation method and study the effect of specific practices on the full cooling process.
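The study estimated cooling rates from temperatures at the beginning and end of the observation period. One hedged way to do this, assuming Newton's law of cooling and an assumed ambient temperature (neither is specified in the EHS-Net protocol as summarized here), is sketched below, together with a check against the 41 °F-in-6-h guideline.

```python
# Newton's law of cooling: T(t) = T_amb + (T0 - T_amb) * exp(-k t).
# Estimate k from two observed temperatures, then project time to reach 41 F.
import math

def cooling_constant(t0_f, t1_f, hours, ambient_f=38.0):
    """k (per hour) from temperatures observed 'hours' apart; ambient_f is an assumption."""
    return -math.log((t1_f - ambient_f) / (t0_f - ambient_f)) / hours

def hours_to_target(t0_f, k, target_f=41.0, ambient_f=38.0):
    return math.log((t0_f - ambient_f) / (target_f - ambient_f)) / k

# hypothetical observation: 135 F at the start, 90 F after 1.5 h of refrigeration
k = cooling_constant(t0_f=135.0, t1_f=90.0, hours=1.5)
print(hours_to_target(135.0, k))   # compliant with the guideline if <= 6 h
```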
Comparison of the thermal neutron scattering treatment in MCNP6 and GEANT4 codes
NASA Astrophysics Data System (ADS)
Tran, H. N.; Marchix, A.; Letourneau, A.; Darpentigny, J.; Menelle, A.; Ott, F.; Schwindling, J.; Chauvin, N.
2018-06-01
To ensure the reliability of simulation tools, verification and comparison should be made regularly. This paper describes the work performed to compare the neutron transport treatment in MCNP6.1 and GEANT4-10.3 in the thermal energy range. The work focuses on the thermal neutron scattering processes for several materials that would be involved in the neutron source designs of Compact Accelerator-based Neutron Sources (CANS), such as beryllium metal, beryllium oxide, polyethylene, graphite, para-hydrogen, light water, heavy water, aluminium and iron. Both the thermal scattering law and the free gas model, taken from the evaluated data library ENDF/B-VII, were considered. It was observed that the GEANT4.10.03-patch2 version was not able to properly account for the coherent elastic process occurring in a crystal lattice. This bug is addressed in this work and the correction should be included in the next release of the code. Cross-section sampling and integral tests have been performed for both simulation codes, showing fair agreement between the two codes for most of the materials except iron and aluminium.
Safety of vendor-prepared foods: evaluation of 10 processing mobile food vendors in Manhattan.
Burt, Bryan M.; Volel, Caroline; Finkel, Madelon
2003-01-01
OBJECTIVES: Unsanitary food handling is a major public health hazard. There are over 4,100 mobile food vendors operating in New York City, and of these, approximately forty percent are processing vendors--mobile food units on which potentially hazardous food products are handled, prepared, or processed. This pilot study assesses the food handling practices of 10 processing mobile food vendors operating in a 38-block area of midtown Manhattan (New York City) from 43rd Street to 62nd Street between Madison and Sixth Avenues, and compares them to regulations stipulated in the New York City Health Code. METHODS: Ten processing mobile food vendors located in midtown Manhattan were observed for a period of 20 minutes each. Unsanitary food handling practices, food storage at potentially unsafe temperatures, and food contamination with uncooked meat or poultry were recorded. RESULTS: Over half of all vendors (67%) were found to contact served foods with bare hands. Four vendors were observed vending with visibly dirty hands or gloves and no vendor once washed his or her hands or changed gloves in the 20-minute observation period. Seven vendors had previously cooked meat products stored at unsafe temperatures on non-heating or non-cooking portions of the vendor cart for the duration of the observation. Four vendors were observed to contaminate served foods with uncooked meat or poultry. CONCLUSIONS: Each of these actions violates the New York City Code of Health and potentially jeopardizes the safety of these vendor-prepared foods. More stringent adherence to food safety regulations should be promoted by the New York City Department of Health. PMID:12941860
Phonologically-Based Priming in the Same-Different Task With L1 Readers.
Lupker, Stephen J; Nakayama, Mariko; Yoshihara, Masahiro
2018-02-01
The present experiment provides an investigation of a promising new tool, the masked priming same-different task, for investigating the orthographic coding process. Orthographic coding is the process of establishing a mental representation of the letters and letter order in the word being read which is then used by readers to access higher-level (e.g., semantic) information about that word. Prior research (e.g., Norris & Kinoshita, 2008) had suggested that performance in this task may be based entirely on orthographic codes. As reported by Lupker, Nakayama, and Perea (2015a), however, in at least some circumstances, phonological codes also play a role. Specifically, even though their 2 languages are completely different orthographically, Lupker et al.'s Japanese-English bilinguals showed priming in this task when masked L1 primes were phonologically similar to L2 targets. An obvious follow-up question is whether Lupker et al.'s effect might have resulted from a strategy that was adopted by their bilinguals to aid in processing of, and memory for, the somewhat unfamiliar L2 targets. In the present experiment, Japanese readers responded to (Japanese) Kanji targets with phonologically identical primes (on "related" trials) being presented in a completely different but highly familiar Japanese script, Hiragana. Once again, significant priming effects were observed, indicating that, although performance in the masked priming same-different task may be mainly based on orthographic codes, phonological codes can play a role even when the stimuli being matched are familiar words from a reader's L1. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Parametric color coding of digital subtraction angiography.
Strother, C M; Bender, F; Deuerling-Zheng, Y; Royalty, K; Pulfer, K A; Baumgart, J; Zellerhoff, M; Aagaard-Kienitz, B; Niemann, D B; Lindstrom, M L
2010-05-01
Color has been shown to facilitate both visual search and recognition tasks. It was our purpose to examine the impact of a color-coding algorithm on the interpretation of 2D-DSA acquisitions by experienced and inexperienced observers. Twenty-six 2D-DSA acquisitions obtained as part of routine clinical care from subjects with a variety of cerebrovascular disease processes were selected from an internal data base so as to include a variety of disease states (aneurysms, AVMs, fistulas, stenosis, occlusions, dissections, and tumors). Three experienced and 3 less experienced observers were each shown the acquisitions on a prerelease version of a commercially available double-monitor workstation (XWP, Siemens Healthcare). Acquisitions were presented first as a subtracted image series and then as a single composite color-coded image of the entire acquisition. Observers were then asked a series of questions designed to assess the value of the color-coded images for the following purposes: 1) to enhance their ability to make a diagnosis, 2) to have confidence in their diagnosis, 3) to plan a treatment, and 4) to judge the effect of a treatment. The results were analyzed by using 1-sample Wilcoxon tests. Color-coded images enhanced the ease of evaluating treatment success in >40% of cases (P < .0001). They also had a statistically significant impact on treatment planning, making planning easier in >20% of the cases (P = .0069). In >20% of the examples, color-coding made diagnosis and treatment planning easier for all readers (P < .0001). Color-coding also increased the confidence of diagnosis compared with the use of DSA alone (P = .056). The impact of this was greater for the naïve readers than for the expert readers. At no additional cost in x-ray dose or contrast medium, color-coding of DSA enhanced the conspicuity of findings on DSA images. It was particularly useful in situations in which there was a complex flow pattern and in evaluation of pre- and posttreatment acquisitions. Its full potential remains to be defined.
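The vendor's color-coding algorithm itself is not described in the abstract; as an illustrative stand-in, the sketch below collapses a subtracted DSA series into one composite image by mapping each pixel's time to peak opacification to a hue and its peak amplitude to brightness. The frame interval and colormap are assumptions, not the workstation's actual parameters.

```python
# Parametric color coding sketch: per-pixel time-to-peak -> hue, peak amplitude -> brightness.
import numpy as np
import matplotlib.pyplot as plt

def color_code_dsa(series, frame_interval_s=0.5):
    """series: (n_frames, h, w) subtracted DSA frames, higher value = more contrast."""
    time_to_peak = series.argmax(axis=0) * frame_interval_s          # seconds, per pixel
    amplitude = series.max(axis=0)                                   # peak opacification
    norm = time_to_peak / (time_to_peak.max() + 1e-9)
    rgba = plt.cm.jet(norm)                                          # hue encodes arrival time
    rgba[..., :3] *= (amplitude / (amplitude.max() + 1e-9))[..., None]  # brightness encodes amplitude
    return rgba

frames = np.random.rand(20, 128, 128)        # stand-in for a real acquisition
composite = color_code_dsa(frames)
```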
NASA Astrophysics Data System (ADS)
Pierazzo, E.; Artemieva, N.; Asphaug, E.; Baldwin, E. C.; Cazamias, J.; Coker, R.; Collins, G. S.; Crawford, D. A.; Davison, T.; Elbeshausen, D.; Holsapple, K. A.; Housen, K. R.; Korycansky, D. G.; Wünnemann, K.
2008-12-01
Over the last few decades, rapid improvement of computer capabilities has allowed impact cratering to be modeled with increasing complexity and realism, and has paved the way for a new era of numerical modeling of the impact process, including full, three-dimensional (3D) simulations. When properly benchmarked and validated against observation, computer models offer a powerful tool for understanding the mechanics of impact crater formation. This work presents results from the first phase of a project to benchmark and validate shock codes. A variety of 2D and 3D codes were used in this study, from commercial products like AUTODYN, to codes developed within the scientific community like SOVA, SPH, ZEUS-MP, iSALE, and codes developed at U.S. National Laboratories like CTH, SAGE/RAGE, and ALE3D. Benchmark calculations of shock wave propagation in aluminum-on-aluminum impacts were performed to examine the agreement between codes for simple idealized problems. The benchmark simulations show that variability in code results is to be expected due to differences in the underlying solution algorithm of each code, artificial stability parameters, spatial and temporal resolution, and material models. Overall, the inter-code variability in peak shock pressure as a function of distance is around 10 to 20%. In general, if the impactor is resolved by at least 20 cells across its radius, the underestimation of peak shock pressure due to spatial resolution is less than 10%. In addition to the benchmark tests, three validation tests were performed to examine the ability of the codes to reproduce the time evolution of crater radius and depth observed in vertical laboratory impacts in water and two well-characterized aluminum alloys. Results from these calculations are in good agreement with experiments. There appears to be a general tendency of shock physics codes to underestimate the radius of the forming crater. Overall, the discrepancy between the model and experiment results is between 10 and 20%, similar to the inter-code variability.
State Dependency of Chemosensory Coding in the Gustatory Thalamus (VPMpc) of Alert Rats
Liu, Haixin
2015-01-01
The parvicellular portion of the ventroposteromedial nucleus (VPMpc) is the part of the thalamus that processes gustatory information. Anatomical evidence shows that the VPMpc receives ascending gustatory inputs from the parabrachial nucleus (PbN) in the brainstem and sends projections to the gustatory cortex (GC). Although taste processing in PbN and GC has been the subject of intense investigation in behaving rodents, much less is known about how VPMpc neurons encode gustatory information. Here we present results from single-unit recordings in the VPMpc of alert rats receiving multiple tastants. Thalamic neurons respond to taste with time-varying modulations of firing rates, consistent with those observed in GC and PbN. These responses encode taste quality as well as palatability. Comparing responses to tastants that were either passively delivered or self-administered after a cue unveiled the effects of general expectation on taste processing in VPMpc. General expectation improved taste coding by modulating response dynamics and single neurons' ability to encode multiple tastants. Our results demonstrate that the time course of taste coding, as well as single neurons' ability to encode multiple qualities, is not fixed but rather can be altered by the state of the animal. Together, the data presented here provide the first description of taste coding in the VPMpc as dynamic and state-dependent. SIGNIFICANCE STATEMENT Over the past years, a great deal of attention has been devoted to understanding taste coding in the brainstem and cortex of alert rodents. Thanks to this research, we now know that taste coding is dynamic, distributed, and context-dependent. Alas, virtually nothing is known about how the gustatory thalamus (VPMpc) processes gustatory information in behaving rats. This manuscript investigates taste processing in the VPMpc of behaving rats. Our results show that thalamic neurons encode taste and palatability with time-varying patterns of activity and that thalamic coding of taste is modulated by general expectation. Our data will appeal not only to researchers interested in taste, but also to a broader audience of sensory and systems neuroscientists interested in the thalamocortical system. PMID:26609147
Complete Decoding and Reporting of Aviation Routine Weather Reports (METARs)
NASA Technical Reports Server (NTRS)
Lui, Man-Cheung Max
2014-01-01
Aviation Routine Weather Reports (METARs) provide surface weather information at and around observation stations, including airport terminals. These weather observations are used by pilots for flight planning and by air traffic service providers for managing departure and arrival flights. The METARs are also an important source of weather data for Air Traffic Management (ATM) analysts and researchers at NASA and elsewhere. These researchers use METARs, for example, to correlate severe weather events with local or national air traffic actions that restrict air traffic. A METAR is made up of multiple groups of coded text, each with a specific standard coding format. These groups of coded text are located in two sections of a report: Body and Remarks. The coded text groups in a U.S. METAR are intended to follow the coding standards set by the National Oceanic and Atmospheric Administration (NOAA). However, manual data entry and edits made by a human report observer may result in coded text elements that do not follow the standards, especially in the Remarks section. And contrary to the standards, some significant weather observations are noted only in the Remarks section and not in the Body section of the reports. While human readers can infer the intended meaning of non-standard coding of weather conditions, doing so with a computer program is far more challenging. However, such programmatic pre-processing is necessary to enable efficient and faster database queries when researchers need to perform any significant historical weather analysis. Therefore, to support such analysis, a computer algorithm was developed to identify groups of coded text anywhere in a report and to perform subsequent decoding in software. The algorithm considers common deviations from the standards and data entry mistakes made by observers. The implemented software was tested on 12 million reports, and the decoding process was able to completely interpret 99.93% of the reports. This document presents the deviations from the standards and the decoding algorithm. Storing all decoded data in a database allows users to quickly query a large amount of data and to perform data mining on it. Users can specify complex query criteria not only on date or airport but also on weather condition. This document also describes the design of a database schema for storing the decoded data, and a Data Warehouse web application that allows users to perform reporting and analysis on the decoded data. Finally, this document presents a case study correlating dust storms reported in METARs from the Phoenix International airport with Ground Stops issued by Air Route Traffic Control Centers (ATCSCC). Blowing widespread dust is one of the weather conditions reported when a dust storm occurs. By querying the database, 294 METARs were found to report blowing widespread dust at the Phoenix airport, and 41 of them reported this condition only in the Remarks section of the reports. When METARs are a data source for ATM research, it is important to include weather conditions not only from the Body section but also from the Remarks section.
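As an illustration of the group-by-group decoding described above, the following sketch decodes a single coded text group (the wind group) with a regular expression. The field names and return format are illustrative assumptions, not the schema of the NASA tool, and a production decoder would add the tolerance for observer deviations discussed in the abstract.

```python
import re

# Wind group, e.g. "24008KT" or "24015G25KT"; "VRB" direction and missing gusts are common.
WIND_RE = re.compile(r"^(?P<dir>\d{3}|VRB)(?P<spd>\d{2,3})(G(?P<gust>\d{2,3}))?KT$")

def decode_wind(group: str):
    """Decode one METAR wind group into a dict; returns None if it does not match."""
    m = WIND_RE.match(group)
    if not m:
        return None
    return {
        "direction_deg": None if m.group("dir") == "VRB" else int(m.group("dir")),
        "speed_kt": int(m.group("spd")),
        "gust_kt": int(m.group("gust")) if m.group("gust") else None,
    }

print(decode_wind("24015G25KT"))   # {'direction_deg': 240, 'speed_kt': 15, 'gust_kt': 25}
```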
NASA Astrophysics Data System (ADS)
Hur, Min Young; Verboncoeur, John; Lee, Hae June
2014-10-01
Particle-in-cell (PIC) simulations offer higher fidelity than fluid simulations for plasma devices that require transient kinetic modeling. They rely on fewer approximations to the plasma kinetics but need many particles and grid cells to obtain meaningful results, so the simulation time grows in proportion to the number of particles. Therefore, PIC simulation needs high-performance computing. In this research, a graphics processing unit (GPU) is adopted for high-performance computing of PIC simulations of low-temperature discharge plasmas. GPUs provide many-core processors and high memory bandwidth compared with a central processing unit (CPU); NVIDIA GeForce GPUs with hundreds of cores were used for the tests and show cost-effective performance. The PIC code is divided into two modules, a field solver and a particle mover, and the particle mover is further divided into four routines named move, boundary, Monte Carlo collision (MCC), and deposit. Overall, the GPU code solves the particle motion as well as the electrostatic potential in two-dimensional geometry almost 30 times faster than a single-CPU code. This work was supported by the Korea Institute of Science and Technology Information.
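A minimal NumPy sketch of two of the four particle-mover routines named above (move and deposit) illustrates the per-particle data parallelism a GPU exploits. It is not the authors' CUDA code; on a GPU the scatter in deposit would require atomic operations, and the field solver, boundary, and MCC routines are omitted.

```python
import numpy as np

def move(x, v, E_at_x, q_over_m, dt):
    """Leapfrog push: advance velocities, then positions (the 'move' routine).
    Each particle is independent, which is what a GPU parallelizes."""
    v = v + q_over_m * E_at_x * dt
    x = x + v * dt
    return x, v

def deposit(x, q, n_grid, dx):
    """Linear (cloud-in-cell) charge deposition onto a periodic 1D grid
    (the 'deposit' routine). On a GPU this scatter needs atomics."""
    rho = np.zeros(n_grid)
    idx = np.floor(x / dx).astype(int) % n_grid   # left grid node of each particle
    w = x / dx - np.floor(x / dx)                 # fractional distance to the left node
    np.add.at(rho, idx, q * (1.0 - w))            # scatter to left node
    np.add.at(rho, (idx + 1) % n_grid, q * w)     # scatter to right node (periodic wrap)
    return rho / dx                               # convert to charge density
```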
Clustering of neural code words revealed by a first-order phase transition
NASA Astrophysics Data System (ADS)
Huang, Haiping; Toyoizumi, Taro
2016-06-01
A network of neurons in the central nervous system collectively represents information by its spiking activity states. Typically observed states, i.e., code words, occupy only a limited portion of the state space due to constraints imposed by network interactions. Geometrical organization of code words in the state space, critical for neural information processing, is poorly understood due to its high dimensionality. Here, we explore the organization of neural code words using retinal data by computing the entropy of code words as a function of Hamming distance from a particular reference codeword. Specifically, we report that the retinal code words in the state space are divided into multiple distinct clusters separated by entropy-gaps, and that this structure is shared with well-known associative memory networks in a recallable phase. Our analysis also elucidates a special nature of the all-silent state. The all-silent state is surrounded by the densest cluster of code words and located within a reachable distance from most code words. This code-word space structure quantitatively predicts typical deviation of a state-trajectory from its initial state. Altogether, our findings reveal a non-trivial heterogeneous structure of the code-word space that shapes information representation in a biological network.
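The entropy-versus-Hamming-distance analysis described above can be sketched as follows. Binning spikes into binary code words and the plug-in entropy estimator are simplifications relative to the paper's methods, and the data here are synthetic.

```python
import numpy as np
from collections import Counter

def entropy_by_hamming_distance(words, reference):
    """words: (n_samples, n_neurons) binary array of population code words
    (one time bin per row). Returns {Hamming distance from `reference`:
    plug-in entropy in bits of the words observed at that distance}."""
    d = (words != reference).sum(axis=1)                  # distance of each word from the reference
    out = {}
    for dist in np.unique(d):
        shell = words[d == dist]                          # all words at this distance
        counts = np.array(list(Counter(map(tuple, shell)).values()), dtype=float)
        p = counts / counts.sum()
        out[int(dist)] = float(-(p * np.log2(p)).sum())   # plug-in entropy of the shell
    return out

# Example: 5000 random 10-neuron words, referenced to the all-silent state
rng = np.random.default_rng(0)
words = (rng.random((5000, 10)) < 0.2).astype(int)
print(entropy_by_hamming_distance(words, np.zeros(10, dtype=int)))
```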
Predictive Coding: A Possible Explanation of Filling-In at the Blind Spot
Raman, Rajani; Sarkar, Sandip
2016-01-01
Filling-in at the blind spot is a perceptual phenomenon in which the visual system fills the informational void, which arises due to the absence of retinal input corresponding to the optic disc, with surrounding visual attributes. It is known that during filling-in, nonlinear neural responses that correlate with the percept are observed in early visual areas, but knowledge of the underlying neural mechanism for filling-in at the blind spot is far from complete. In this work, we attempt to present a fresh perspective on the computational mechanism of the filling-in process in the framework of hierarchical predictive coding, which provides a functional explanation for a range of neural responses in the cortex. We simulated a three-level hierarchical network and observed its response while stimulating the network with different bar stimuli across the blind spot. We find that the predictive-estimator neurons that represent the blind spot in primary visual cortex exhibit an elevated nonlinear response when the bar stimulates both sides of the blind spot. Using a generative model, we also show that these responses represent filling-in completion. All these results are consistent with the findings of psychophysical and physiological studies. In this study, we also demonstrate that the tolerance in filling-in qualitatively matches the experimental findings related to non-aligned bars. We discuss this phenomenon in the predictive coding paradigm and show that all our results can be explained by taking into account the efficient coding of natural images along with feedback and feed-forward connections that allow priors and predictions to co-evolve to arrive at the best prediction. These results suggest that the filling-in process could be a manifestation of the general computational principle of hierarchical predictive coding of natural images. PMID:26959812
Siemann, Julia; Petermann, Franz
2018-01-01
This review reconciles past findings on numerical processing with key assumptions of the most predominant model of arithmetic in the literature, the Triple Code Model (TCM). This is done by reporting diverse findings in the literature, ranging from behavioral studies of basic arithmetic operations, through neuroimaging studies of numerical processing, to developmental studies concerned with arithmetic acquisition, with a special focus on developmental dyscalculia (DD). We evaluate whether these studies corroborate the model and discuss possible reasons for contradictory findings. A separate section is dedicated to the transfer of TCM to arithmetic development and to alternative accounts focusing on developmental questions of numerical processing. We conclude with recommendations for future directions of arithmetic research, raising questions that require answers in models of healthy as well as abnormal mathematical development. This review assesses the leading model in the field of arithmetic processing (the Triple Code Model) by presenting knowledge from interdisciplinary research. It weighs the observed contradictory findings and integrates the resulting opposing viewpoints. The focus is on the development of arithmetic expertise as well as abnormal mathematical development. The original aspect of this article is that it points to a gap in research on these topics and provides possible solutions for future models. Copyright © 2017 Elsevier Ltd. All rights reserved.
GRay: A Massively Parallel GPU-based Code for Ray Tracing in Relativistic Spacetimes
NASA Astrophysics Data System (ADS)
Chan, Chi-kwan; Psaltis, Dimitrios; Özel, Feryal
2013-11-01
We introduce GRay, a massively parallel integrator designed to trace the trajectories of billions of photons in a curved spacetime. This graphics-processing-unit (GPU)-based integrator employs the stream processing paradigm, is implemented in CUDA C/C++, and runs on nVidia graphics cards. The peak performance of GRay using single-precision floating-point arithmetic on a single GPU exceeds 300 GFLOP (or 1 ns per photon per time step). For a realistic problem, where the peak performance cannot be reached, GRay is two orders of magnitude faster than existing central-processing-unit-based ray-tracing codes. This performance enhancement allows more effective searches of large parameter spaces when comparing theoretical predictions of images, spectra, and light curves from the vicinities of compact objects to observations. GRay can also perform on-the-fly ray tracing within general relativistic magnetohydrodynamic algorithms that simulate accretion flows around compact objects. Making use of this algorithm, we calculate the properties of the shadows of Kerr black holes and the photon rings that surround them. We also provide accurate fitting formulae of their dependencies on black hole spin and observer inclination, which can be used to interpret upcoming observations of the black holes at the center of the Milky Way, as well as M87, with the Event Horizon Telescope.
Statistical inference of static analysis rules
NASA Technical Reports Server (NTRS)
Engler, Dawson Richards (Inventor)
2009-01-01
Various apparatus and methods are disclosed for identifying errors in program code. Respective numbers of observances of at least one correctness rule by different code instances that relate to the at least one correctness rule are counted in the program code. Each code instance has an associated counted number of observances of the correctness rule by the code instance. Also counted are respective numbers of violations of the correctness rule by different code instances that relate to the correctness rule. Each code instance has an associated counted number of violations of the correctness rule by the code instance. A respective likelihood of the validity is determined for each code instance as a function of the counted number of observances and counted number of violations. The likelihood of validity indicates a relative likelihood that a related code instance is required to observe the correctness rule. The violations may be output in order of the likelihood of validity of a violated correctness rule.
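In the spirit of the counting described above, the sketch below ranks violations by how strongly the code base "believes" each rule, using a simple binomial z-score as an illustrative stand-in for the likelihood of validity; the patent does not specify this particular statistic, and the rules and counts are hypothetical.

```python
import math

def rank_violations(counts):
    """counts: dict mapping rule -> (n_observed, n_violated).
    Returns entries sorted so that violations of strongly supported rules
    (many observances, few violations) come first."""
    ranked = []
    for rule, (obs, viol) in counts.items():
        n = obs + viol
        if n == 0:
            continue
        p = obs / n
        z = (p - 0.5) / math.sqrt(0.25 / n)   # deviation from "rule holds only by chance"
        ranked.append((z, rule, obs, viol))
    return sorted(ranked, reverse=True)

# Hypothetical counts gathered by a checker over a code base
counts = {
    "lock(l) is eventually followed by unlock(l)": (120, 2),
    "return value of read() is checked": (15, 14),
}
for z, rule, obs, viol in rank_violations(counts):
    print(f"z={z:6.2f}  observed={obs:4d}  violated={viol:3d}  {rule}")
```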
Voss, Erica A; Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B
2015-05-01
To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Transformation to the CDM resulted in minimal information loss across all 6 databases. The patients and observations that were excluded were due to identified data quality issues in the source systems; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of the protocol's inclusion criteria and identified differences in patient characteristics and coding practices across databases. Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
Assessing the Formation of Experience-Based Gender Expectations in an Implicit Learning Scenario
Öttl, Anton; Behne, Dawn M.
2017-01-01
The present study investigates the formation of new word-referent associations in an implicit learning scenario, using a gender-coded artificial language with spoken words and visual referents. Previous research has shown that when participants are explicitly instructed about the gender-coding system underlying an artificial lexicon, they monitor the frequency of exposure to male vs. female referents within this lexicon, and subsequently use this probabilistic information to predict the gender of an upcoming referent. In an explicit learning scenario, the auditory and visual gender cues are necessarily highlighted prior to acquisition, and the effects previously observed may therefore depend on participants' overt awareness of these cues. To assess whether the formation of experience-based expectations is dependent on explicit awareness of the underlying coding system, we present data from an experiment in which gender coding was acquired implicitly, thereby reducing the likelihood that visual and auditory gender cues are used strategically during acquisition. Results show that even if the gender-coding system was not perfectly mastered (as reflected in the number of gender-coding errors), participants develop frequency-based expectations comparable to those previously observed in an explicit learning scenario. In line with previous findings, participants are quicker at recognizing a referent whose gender is consistent with an induced expectation than one whose gender is inconsistent with an induced expectation. At the same time, however, eyetracking data suggest that these expectations may surface earlier in an implicit learning scenario. These findings suggest that experience-based expectations are robust against manner of acquisition, and contribute to understanding why similar expectations observed in the activation of stereotypes during the processing of natural language stimuli are difficult or impossible to suppress. PMID:28936186
Parameterization of Small-Scale Processes
1989-09-01
1989, Honolulu, Hawaii. ...detailed sensitivity studies to assess the dependence of results on the eddy viscosities and diffusivities by a direct comparison with certain observations... better sub-grid scale parameterization is to mount a concerted search for model fits to observations. These would require exhaustive sensitivity studies.
Research on pre-processing of QR Code
NASA Astrophysics Data System (ADS)
Sun, Haixing; Xia, Haojie; Dong, Ning
2013-10-01
The QR code (Quick Response Code) can encode many kinds of information because of its advantages: large storage capacity, high reliability, omnidirectional ultra-high-speed reading, small printout size, and efficient representation of Chinese characters. In order to obtain a cleaner binarized image from a complex background and to improve the QR code recognition rate, this paper investigates pre-processing methods for QR codes and presents algorithms and results of image pre-processing for QR code recognition. The conventional approach is improved by modifying Sauvola's adaptive binarization method for text images. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
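The adaptive binarization step can be sketched with Sauvola's formula, which the paper modifies; the window size, k, and R values below are conventional illustrative defaults for 8-bit images, not the paper's tuned parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_binarize(img, window=15, k=0.2, R=128.0):
    """Sauvola adaptive threshold: T = m * (1 + k * (s / R - 1)), where m and s
    are the local mean and standard deviation in a window x window patch."""
    img = img.astype(float)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img * img, window)
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    threshold = mean * (1.0 + k * (std / R - 1.0))
    return (img > threshold).astype(np.uint8)   # 1 = light background, 0 = dark QR module
```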
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2015-05-01
As is well known, the passive THz camera is a very promising tool for security applications: it allows concealed objects to be seen without contact and poses no danger to the person being screened. In previous papers we demonstrated a new possibility of using the passive THz camera to observe a temperature difference on the human skin when this difference is caused by different temperatures inside the body. To verify this claim we performed a similar physical experiment using an IR camera, showing that a temperature trace appears on the skin of the human body when the temperature inside the body changes after drinking water. We used both the computer code supplied for processing images captured by a commercially available IR camera manufactured by Flir Corp. and our own computer code for processing these images. Using both codes we clearly demonstrate the change in human body skin temperature induced by drinking water. The phenomena shown are very important for detecting forbidden samples and substances concealed inside the human body by non-destructive inspection without the use of X-rays; earlier we demonstrated this possibility using THz radiation. The experiments carried out can be applied to counter-terrorism problems. We also developed original filters for computer processing of images captured by IR cameras; their application enhances the temperature resolution of the cameras.
Tests of Exoplanet Atmospheric Radiative Transfer Codes
NASA Astrophysics Data System (ADS)
Harrington, Joseph; Challener, Ryan; DeLarme, Emerson; Cubillos, Patricio; Blecic, Jasmina; Foster, Austin; Garland, Justin
2016-10-01
Atmospheric radiative transfer codes are used both to predict planetary spectra and in retrieval algorithms to interpret data. Observational plans, theoretical models, and scientific results thus depend on the correctness of these calculations. Yet, the calculations are complex and the codes implementing them are often written without modern software-verification techniques. In the process of writing our own code, we became aware of several others with artifacts of unknown origin and even outright errors in their spectra. We present a series of tests to verify atmospheric radiative-transfer codes. These include: simple, single-line line lists that, when combined with delta-function abundance profiles, should produce a broadened line that can be verified easily; isothermal atmospheres that should produce analytically-verifiable blackbody spectra at the input temperatures; and model atmospheres with a range of complexities that can be compared to the output of other codes. We apply the tests to our own code, Bayesian Atmospheric Radiative Transfer (BART) and to several other codes. The test suite is open-source software. We propose this test suite as a standard for verifying current and future radiative transfer codes, analogous to the Held-Suarez test for general circulation models. This work was supported by NASA Planetary Atmospheres grant NX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.
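One of the proposed tests can be sketched directly: for an isothermal atmosphere the emergent intensity must reduce to the Planck function at the input temperature, so a radiative-transfer code's output can be checked analytically. This is a generic check under that assumption, not BART's actual test-suite code; the wavelength range and optical depth below are arbitrary.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck(wavelength_m, T):
    """Planck spectral radiance B_lambda(T) in W m^-3 sr^-1."""
    x = H * C / (wavelength_m * KB * T)
    return 2.0 * H * C**2 / wavelength_m**5 / np.expm1(x)

def emergent_intensity_isothermal(wavelength_m, T, tau_total):
    """Formal solution for an isothermal slab over a boundary at the same T:
    I = B*(1 - exp(-tau)) + B*exp(-tau), which must equal B(T) for any tau."""
    B = planck(wavelength_m, T)
    return B * (1.0 - np.exp(-tau_total)) + B * np.exp(-tau_total)

wl = np.linspace(0.5e-6, 5e-6, 200)
assert np.allclose(emergent_intensity_isothermal(wl, 1500.0, 3.7), planck(wl, 1500.0))
```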
MHD code using multi graphical processing units: SMAUG+
NASA Astrophysics Data System (ADS)
Gyenge, N.; Griffiths, M. K.; Erdélyi, R.
2018-01-01
This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques, and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with Brio-Wu shock tube simulations with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slow-downs, depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.
NASA Technical Reports Server (NTRS)
Kalb, Michael; Robertson, Franklin; Jedlovec, Gary; Perkey, Donald
1987-01-01
Techniques by which mesoscale numerical weather prediction model output and radiative transfer codes are combined to simulate the radiance fields that a given passive temperature/moisture satellite sensor would see if viewing the evolving model atmosphere are introduced. The goals are to diagnose the dynamical atmospheric processes responsible for recurring patterns in observed satellite radiance fields, and to develop techniques to anticipate the ability of satellite sensor systems to depict atmospheric structures and provide information useful for numerical weather prediction (NWP). The concept of linking radiative transfer and dynamical NWP codes is demonstrated with time sequences of simulated radiance imagery in the 24 TIROS vertical sounder channels derived from model integrations for March 6, 1982.
Advanced technology development for image gathering, coding, and processing
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.
1990-01-01
Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.
Measuring diagnoses: ICD code accuracy.
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-10-01
To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.
NASA Astrophysics Data System (ADS)
Grenier, Christophe; Anbergen, Hauke; Bense, Victor; Chanzy, Quentin; Coon, Ethan; Collier, Nathaniel; Costard, François; Ferry, Michel; Frampton, Andrew; Frederick, Jennifer; Gonçalvès, Julio; Holmén, Johann; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Mouche, Emmanuel; Orgogozo, Laurent; Pannetier, Romain; Rivière, Agnès; Roux, Nicolas; Rühaak, Wolfram; Scheidegger, Johanna; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik; Voss, Clifford
2018-04-01
In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. This issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs resulting from differences in the governing equations, discretization issues, or in the freezing curve used by some codes.
Data Assimilation - Advances and Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
2014-07-30
This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
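A minimal sketch of the stochastic ensemble Kalman filter analysis step mentioned above is given below, with illustrative dimensions; operational implementations add covariance localization, inflation, and more careful linear algebra.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic EnKF analysis step.
    X: (n_state, n_ens) forecast ensemble, y: (n_obs,) observation,
    H: (n_obs, n_state) observation operator, R: (n_obs, n_obs) obs-error covariance."""
    n_obs, n_ens = y.size, X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                        # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T  # perturbed obs
    return X + K @ (Y - H @ X)                       # analysis ensemble

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 50))                         # 3 state variables, 50 ensemble members
H = np.array([[1.0, 0.0, 0.0]])                      # observe only the first state variable
print(enkf_update(X, np.array([0.8]), H, np.array([[0.1]]), rng).mean(axis=1))
```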
Functional dissociation of stimulus intensity encoding and predictive coding of pain in the insula
Geuter, Stephan; Boll, Sabrina; Eippert, Falk; Büchel, Christian
2017-01-01
The computational principles by which the brain creates a painful experience from nociception are still unknown. Classic theories suggest that cortical regions either reflect stimulus intensity or additive effects of intensity and expectations, respectively. By contrast, predictive coding theories provide a unified framework explaining how perception is shaped by the integration of beliefs about the world with mismatches resulting from the comparison of these beliefs against sensory input. Using functional magnetic resonance imaging during a probabilistic heat pain paradigm, we investigated which computations underlie pain perception. Skin conductance, pupil dilation, and anterior insula responses to cued pain stimuli strictly followed the response patterns hypothesized by the predictive coding model, whereas posterior insula encoded stimulus intensity. This novel functional dissociation of pain processing within the insula together with previously observed alterations in chronic pain offer a novel interpretation of aberrant pain processing as disturbed weighting of predictions and prediction errors. DOI: http://dx.doi.org/10.7554/eLife.24770.001 PMID:28524817
Neural code alterations and abnormal time patterns in Parkinson’s disease
NASA Astrophysics Data System (ADS)
Andres, Daniela Sabrina; Cerquetti, Daniel; Merello, Marcelo
2015-04-01
Objective. The neural code used by the basal ganglia is a current question in neuroscience, relevant for the understanding of the pathophysiology of Parkinson’s disease. While a rate code is known to participate in the communication between the basal ganglia and the motor thalamus/cortex, different lines of evidence have also favored the presence of complex time patterns in the discharge of the basal ganglia. To gain insight into the way the basal ganglia code information, we studied the activity of the globus pallidus pars interna (GPi), an output node of the circuit. Approach. We implemented the 6-hydroxydopamine model of Parkinsonism in Sprague-Dawley rats, and recorded the spontaneous discharge of single GPi neurons, in head-restrained conditions at full alertness. Analyzing the temporal structure function, we looked for characteristic scales in the neuronal discharge of the GPi. Main results. At a low-scale, we observed the presence of dynamic processes, which allow the transmission of time patterns. Conversely, at a middle-scale, stochastic processes force the use of a rate code. Regarding the time patterns transmitted, we measured the word length and found that it is increased in Parkinson’s disease. Furthermore, it showed a positive correlation with the frequency of discharge, indicating that an exacerbation of this abnormal time pattern length can be expected, as the dopamine depletion progresses. Significance. We conclude that a rate code and a time pattern code can co-exist in the basal ganglia at different temporal scales. However, their normal balance is progressively altered and replaced by pathological time patterns in Parkinson’s disease.
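The scale-dependent analysis described above can be illustrated with a generic first-order temporal structure function computed on an interspike-interval series; the paper's exact estimator, normalization, and the scales separating dynamic from stochastic regimes may differ, and the example data here are synthetic.

```python
import numpy as np

def structure_function(isi, max_lag=200):
    """Generic first-order temporal structure function of an interspike-interval
    series: S(k) = mean(|isi[i+k] - isi[i]|). Different scaling regimes of S(k)
    at small vs. intermediate lags can separate pattern-carrying (dynamic)
    scales from rate-coding (stochastic) scales."""
    return np.array([np.mean(np.abs(isi[k:] - isi[:-k]))
                     for k in range(1, max_lag + 1)])

# Example: a synthetic ISI series with a slow rate drift plus jitter
rng = np.random.default_rng(0)
isi = 0.02 + 0.005 * np.sin(np.linspace(0, 20, 2000)) + rng.normal(0, 0.002, 2000)
S = structure_function(isi)
```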
RAPTOR. I. Time-dependent radiative transfer in arbitrary spacetimes
NASA Astrophysics Data System (ADS)
Bronzwaer, T.; Davelaar, J.; Younsi, Z.; Mościbrodzka, M.; Falcke, H.; Kramer, M.; Rezzolla, L.
2018-05-01
Context. Observational efforts to image the immediate environment of a black hole at the scale of the event horizon benefit from the development of efficient imaging codes that are capable of producing synthetic data, which may be compared with observational data. Aims: We aim to present RAPTOR, a new public code that produces accurate images, animations, and spectra of relativistic plasmas in strong gravity by numerically integrating the equations of motion of light rays and performing time-dependent radiative transfer calculations along the rays. The code is compatible with any analytical or numerical spacetime. It is hardware-agnostic and may be compiled and run both on GPUs and CPUs. Methods: We describe the algorithms used in RAPTOR and test the code's performance. We have performed a detailed comparison of RAPTOR output with that of other radiative-transfer codes and demonstrate convergence of the results. We then applied RAPTOR to study accretion models of supermassive black holes, performing time-dependent radiative transfer through general relativistic magneto-hydrodynamical (GRMHD) simulations and investigating the expected observational differences between the so-called fast-light and slow-light paradigms. Results: Using RAPTOR to produce synthetic images and light curves of a GRMHD model of an accreting black hole, we find that the relative difference between fast-light and slow-light light curves is less than 5%. Using two distinct radiative-transfer codes to process the same data, we find integrated flux densities with a relative difference less than 0.01%. Conclusions: For two-dimensional GRMHD models, such as those examined in this paper, the fast-light approximation suffices as long as errors of a few percent are acceptable. The convergence of the results of two different codes demonstrates that they are, at a minimum, consistent. The public version of RAPTOR is available at the following URL: https://github.com/tbronzwaer/raptor
Auditory Speech Perception Tests in Relation to the Coding Strategy in Cochlear Implant.
Bazon, Aline Cristine; Mantello, Erika Barioni; Gonçales, Alina Sanches; Isaac, Myriam de Lima; Hyppolito, Miguel Angelo; Reis, Ana Cláudia Mirândola Barbosa
2016-07-01
The objective of evaluating the auditory perception of cochlear implant users is to determine how the acoustic signal is processed, leading to the recognition and understanding of sound. The aims were to investigate differences in auditory speech perception in individuals with postlingual hearing loss wearing a cochlear implant, using two different speech coding strategies, and to analyze speech perception and handicap perception in relation to the strategy used. This is a prospective, descriptive, cross-sectional cohort study. We selected ten cochlear implant users, who were characterized by hearing threshold, by speech perception tests, and by the Hearing Handicap Inventory for Adults. There was no significant difference in the variables subject age, age at acquisition of hearing loss, etiology, time of hearing deprivation, time of cochlear implant use, or mean hearing threshold with the cochlear implant when the speech coding strategy was changed. There was no relationship between lack of handicap perception and improvement in speech perception for either speech coding strategy used. There was no significant difference between the strategies evaluated, and no relation was observed between them and the variables studied.
GALARIO: a GPU accelerated library for analysing radio interferometer observations
NASA Astrophysics Data System (ADS)
Tazzari, Marco; Beaujean, Frederik; Testi, Leonardo
2018-06-01
We present GALARIO, a computational library that exploits the power of modern graphical processing units (GPUs) to accelerate the analysis of observations from radio interferometers like the Atacama Large Millimeter/submillimeter Array (ALMA) or the Karl G. Jansky Very Large Array. GALARIO speeds up the computation of synthetic visibilities from a generic 2D model image or a radial brightness profile (for axisymmetric sources). On a GPU, GALARIO is 150 times faster than standard PYTHON and 10 times faster than serial C++ code on a CPU. Highly modular, easy to use, and easy to adopt in existing code, GALARIO comes as two compiled libraries, one for Nvidia GPUs and one for multicore CPUs, where both have the same functions with identical interfaces. GALARIO comes with PYTHON bindings but can also be directly used in C or C++. The versatility and the speed of GALARIO open new analysis pathways that otherwise would be prohibitively time consuming, e.g. fitting high-resolution observations of a large number of objects, or entire spectral cubes of molecular gas emission. It is a general tool that can be applied to any field that uses radio interferometer observations. The source code is available online at http://github.com/mtazzari/galario under the open source GNU Lesser General Public License v3.
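The core operation accelerated here, computing synthetic visibilities from a model image, can be sketched as a Fourier transform of the image sampled at the observed (u, v) points. GALARIO itself uses more accurate interpolation, phase and origin corrections, and GPU kernels, so this NumPy version with nearest-neighbour sampling is only illustrative and assumes the (u, v) points fall within the transformed grid.

```python
import numpy as np

def synthetic_visibilities(image, dxy, u, v):
    """Approximate model visibilities at (u, v) points [wavelengths] from a square
    sky image [Jy/pixel] with pixel size dxy [rad]: FFT the image and sample the
    transform with nearest-neighbour interpolation."""
    n = image.shape[0]
    vis_grid = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(image)))
    uv_cell = 1.0 / (n * dxy)                        # uv-plane grid spacing [wavelengths]
    iu = np.rint(u / uv_cell).astype(int) + n // 2   # nearest grid column for each u
    iv = np.rint(v / uv_cell).astype(int) + n // 2   # nearest grid row for each v
    return vis_grid[iv, iu]

# Example: a Gaussian blob imaged on a 256-pixel grid, sampled at a few baselines
n, dxy = 256, 1e-7
yy, xx = np.mgrid[:n, :n] - n // 2
image = np.exp(-(xx**2 + yy**2) / (2 * 10.0**2))
print(synthetic_visibilities(image, dxy, np.array([1e4, 5e4]), np.array([0.0, 2e4])))
```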
Correlated prompt fission data in transport simulations
Talou, P.; Vogt, R.; Randrup, J.; ...
2018-01-24
Detailed information on the fission process can be inferred from the observation, modeling and theoretical understanding of prompt fission neutron and γ-ray observables. Beyond simple average quantities, the study of distributions and correlations in prompt data, e.g., multiplicity-dependent neutron and γ-ray spectra, angular distributions of the emitted particles, n - n, n - γ, and γ - γ correlations, can place stringent constraints on fission models and parameters that would otherwise be free to be tuned separately to represent individual fission observables. The FREYA and CGMF codes have been developed to follow the sequential emissions of prompt neutrons and γ rays from the initial excited fission fragments produced right after scission. Both codes implement Monte Carlo techniques to sample initial fission fragment configurations in mass, charge and kinetic energy and sample probabilities of neutron and γ emission at each stage of the decay. This approach naturally leads to using simple but powerful statistical techniques to infer distributions and correlations among many observables and model parameters. The comparison of model calculations with experimental data provides a rich arena for testing various nuclear physics models such as those related to the nuclear structure and level densities of neutron-rich nuclei, the γ-ray strength functions of dipole and quadrupole transitions, the mechanism for dividing the excitation energy between the two nascent fragments near scission, and the mechanisms behind the production of angular momentum in the fragments, etc. Beyond the obvious interest from a fundamental physics point of view, such studies are also important for addressing data needs in various nuclear applications. The inclusion of the FREYA and CGMF codes into the MCNP6.2 and MCNPX - PoliMi transport codes, for instance, provides a new and powerful tool to simulate correlated fission events in neutron transport calculations important in nonproliferation, safeguards, nuclear energy, and defense programs. Here, this review provides an overview of the topic, starting from theoretical considerations of the fission process, with a focus on correlated signatures. It then explores the status of experimental correlated fission data and current efforts to address some of the known shortcomings. Numerical simulations employing the FREYA and CGMF codes are compared to experimental data for a wide range of correlated fission quantities. The inclusion of those codes into the MCNP6.2 and MCNPX - PoliMi transport codes is described and discussed in the context of relevant applications. The accuracy of the model predictions and their sensitivity to model assumptions and input parameters are discussed. Lastly, a series of important experimental and theoretical questions that remain unanswered are presented, suggesting a renewed effort to address these shortcomings.
Correlated prompt fission data in transport simulations
NASA Astrophysics Data System (ADS)
Talou, P.; Vogt, R.; Randrup, J.; Rising, M. E.; Pozzi, S. A.; Verbeke, J.; Andrews, M. T.; Clarke, S. D.; Jaffke, P.; Jandel, M.; Kawano, T.; Marcath, M. J.; Meierbachtol, K.; Nakae, L.; Rusev, G.; Sood, A.; Stetcu, I.; Walker, C.
2018-01-01
Detailed information on the fission process can be inferred from the observation, modeling and theoretical understanding of prompt fission neutron and γ-ray observables. Beyond simple average quantities, the study of distributions and correlations in prompt data, e.g., multiplicity-dependent neutron and γ-ray spectra, angular distributions of the emitted particles, n - n, n - γ, and γ - γ correlations, can place stringent constraints on fission models and parameters that would otherwise be free to be tuned separately to represent individual fission observables. The FREYA and CGMF codes have been developed to follow the sequential emissions of prompt neutrons and γ rays from the initial excited fission fragments produced right after scission. Both codes implement Monte Carlo techniques to sample initial fission fragment configurations in mass, charge and kinetic energy and sample probabilities of neutron and γ emission at each stage of the decay. This approach naturally leads to using simple but powerful statistical techniques to infer distributions and correlations among many observables and model parameters. The comparison of model calculations with experimental data provides a rich arena for testing various nuclear physics models such as those related to the nuclear structure and level densities of neutron-rich nuclei, the γ-ray strength functions of dipole and quadrupole transitions, the mechanism for dividing the excitation energy between the two nascent fragments near scission, and the mechanisms behind the production of angular momentum in the fragments, etc. Beyond the obvious interest from a fundamental physics point of view, such studies are also important for addressing data needs in various nuclear applications. The inclusion of the FREYA and CGMF codes into the MCNP6.2 and MCNPX - PoliMi transport codes, for instance, provides a new and powerful tool to simulate correlated fission events in neutron transport calculations important in nonproliferation, safeguards, nuclear energy, and defense programs. This review provides an overview of the topic, starting from theoretical considerations of the fission process, with a focus on correlated signatures. It then explores the status of experimental correlated fission data and current efforts to address some of the known shortcomings. Numerical simulations employing the FREYA and CGMF codes are compared to experimental data for a wide range of correlated fission quantities. The inclusion of those codes into the MCNP6.2 and MCNPX - PoliMi transport codes is described and discussed in the context of relevant applications. The accuracy of the model predictions and their sensitivity to model assumptions and input parameters are discussed. Finally, a series of important experimental and theoretical questions that remain unanswered are presented, suggesting a renewed effort to address these shortcomings.
Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Dali; Yuan, Fengming; Hernandez, Benjamin
Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability and in-situ data analytics for Earth system model simulation, as well as model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.
The neutral emergence of error minimized genetic codes superior to the standard genetic code.
Massey, Steven E
2016-11-07
The standard genetic code (SGC) assigns amino acids to codons in such a way that the impact of point mutations is reduced; this is termed 'error minimization' (EM). The occurrence of EM has been attributed to the direct action of selection; however, it is difficult to explain how a search through alternative codes for an error-minimized code could occur via codon reassignments, given that these are likely to be disruptive to the proteome. An alternative scenario is that EM has arisen via the process of genetic code expansion, facilitated by the duplication of genes encoding charging enzymes and adaptor molecules. This is likely to have led to similar amino acids being assigned to similar codons. Strikingly, we show that if, during code expansion, the most similar amino acid to the parent amino acid (out of the set of unassigned amino acids) is assigned to codons related to those of the parent amino acid, then genetic codes with EM superior to the SGC easily arise. This scheme mimics code expansion via the gene duplication of charging enzymes and adaptors. The result is obtained for a variety of different schemes of genetic code expansion and provides a mechanistically realistic manner in which EM has arisen in the SGC. These observations might be taken as evidence for self-organization in the earliest stages of life. Copyright © 2016 Elsevier Ltd. All rights reserved.
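The error-minimization measure discussed above can be made concrete with a small sketch that scores a genetic code by the mean squared change in an amino-acid property over all single-nucleotide substitutions between sense codons. The property values (for example, Woese's polar requirement) and the exact weighting scheme used in the paper are not reproduced here; the caller supplies them.

```python
import itertools

BASES = "UCAG"

def error_cost(code, aa_property):
    """Mean squared change in an amino-acid property over all single-point
    mutations that keep both codons as sense codons.
    code: dict codon -> amino acid (with '*' for stop codons).
    aa_property: dict amino acid -> numeric value (e.g. polar requirement)."""
    total, n = 0.0, 0
    for codon, aa in code.items():
        if aa == "*":
            continue
        for pos, alt in itertools.product(range(3), BASES):
            if alt == codon[pos]:
                continue
            mutant = codon[:pos] + alt + codon[pos + 1:]
            aa2 = code.get(mutant, "*")
            if aa2 == "*":
                continue
            total += (aa_property[aa] - aa_property[aa2]) ** 2
            n += 1
    return total / n
```

In the usual analysis, this cost for the SGC is compared against the distribution of costs obtained for many randomized codon assignments; a lower-than-chance cost is what is meant by error minimization.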
Gilmore-Bykovskyi, Andrea L
2015-01-01
Mealtime behavioral symptoms are distressing and frequently interrupt eating for the individual experiencing them and others in the environment. A computer-assisted coding scheme was developed to measure caregiver person-centeredness and behavioral symptoms for nursing home residents with dementia during mealtime interactions. The purpose of this pilot study was to determine the feasibility, ease of use, and inter-observer reliability of the coding scheme, and to explore the clinical utility of the coding scheme. Trained observers coded 22 observations. Data collection procedures were acceptable to participants. Overall, the coding scheme proved to be feasible, easy to execute and yielded good to very good inter-observer agreement following observer re-training. The coding scheme captured clinically relevant, modifiable antecedents to mealtime behavioral symptoms, but would be enhanced by the inclusion of measures for resident engagement and consolidation of items for measuring caregiver person-centeredness that co-occurred and were difficult for observers to distinguish. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
Chaikovsky, A.; Dubovik, O.; Holben, Brent N.; Bril, A.; Goloub, P.; Tanre, D.; Pappalardo, G.; Wandinger, U.; Chaikovskaya, L.; Denisov, S.;
2015-01-01
This paper presents a detailed description of LIRIC (LIdar-Radiometer Inversion Code), an algorithm for simultaneous processing of coincident lidar and radiometric (sun photometric) observations for the retrieval of aerosol concentration vertical profiles. As the lidar and radiometric input data we use measurements from European Aerosol Research Lidar Network (EARLINET) lidars and collocated sun photometers of the Aerosol Robotic Network (AERONET). The LIRIC data processing provides sequential inversion of the combined lidar and radiometric data: column-integrated aerosol parameters are first estimated from the radiometric measurements, followed by the retrieval of height-dependent concentrations of fine and coarse aerosols from the lidar signals, using the integrated column characteristics of the aerosol layer as a priori constraints. The use of polarized lidar observations allows us to discriminate between spherical and non-spherical particles of the coarse aerosol mode. The LIRIC software package was implemented and tested at a number of EARLINET stations. An inter-comparison of LIRIC-based aerosol retrievals was performed for observations by seven EARLINET lidars in Leipzig, Germany on 25 May 2009. We found close agreement between the aerosol parameters derived from different lidars, which supports the high robustness of the LIRIC algorithm. The sensitivity of the retrieval results to the possible reduction of the available observation data is also discussed.
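To make the two-step logic concrete, here is a deliberately simplified numerical sketch in Python. It is not the LIRIC algorithm itself: it assumes only two lidar wavelengths, fixed mode-specific backscatter coefficients, synthetic profiles, and no polarization channel, and it imposes the radiometric column values by simple rescaling rather than by the formal a priori constraints used in LIRIC.
    import numpy as np
    # Hypothetical inputs: two-wavelength backscatter profiles on a height grid, specific
    # backscatter coefficients per aerosol mode, and AERONET-like columnar concentrations.
    z = np.linspace(100.0, 5000.0, 50)                          # height grid [m]
    dz = z[1] - z[0]
    k = np.array([[2.0e-6, 0.6e-6],                             # rows: two wavelengths
                  [1.2e-6, 0.9e-6]])                            # cols: fine, coarse mode
    true_cf = 40.0 * np.exp(-z / 1500.0)                        # synthetic fine-mode profile
    true_cc = 15.0 * np.exp(-((z - 2500.0) / 600.0) ** 2)       # synthetic coarse-mode profile
    beta = k @ np.vstack([true_cf, true_cc])                    # simulated lidar signals
    col_fine, col_coarse = true_cf.sum() * dz, true_cc.sum() * dz   # columnar constraints
    # Step 1 (radiometric) is represented here by the known k and column values above.
    # Step 2: per-height least-squares split of the lidar signal into fine and coarse parts.
    conc = np.array([np.linalg.lstsq(k, beta[:, i], rcond=None)[0] for i in range(z.size)]).T
    conc = np.clip(conc, 0.0, None)                             # concentrations must be non-negative
    # Step 3: rescale each profile so its column integral matches the radiometric retrieval.
    conc[0] *= col_fine / (conc[0].sum() * dz)
    conc[1] *= col_coarse / (conc[1].sum() * dz)
    print("fine/coarse column ratio:", round(conc[0].sum() / conc[1].sum(), 3))
The key point is the division of labour: the per-height fit separates fine and coarse contributions, while the column values derived from the sun photometer anchor the absolute concentrations.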
NASA Astrophysics Data System (ADS)
Ford, Eric B.
2009-05-01
We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce 280GTX and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating nsys > 1024 model planetary systems, each containing npl = 4 planets and assuming nobs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
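For readers who want the structure of the computation rather than the CUDA implementation, the sketch below shows a CPU-side NumPy version of the core steps for a single planet: the mean anomaly from time and period (the step the abstract identifies as needing double precision), a Newton iteration for Kepler's equation, and a χ2 evaluation against radial velocity data. The parameter values and synthetic data are illustrative only; for multiple planets the radial velocity contributions would simply be summed.
    import numpy as np
    def solve_kepler(M, e, tol=1e-12, max_iter=50):
        """Solve E - e*sin(E) = M for the eccentric anomaly E (Newton's method, vectorized)."""
        E = M + e * np.sin(M)                       # reasonable starting guess
        for _ in range(max_iter):
            dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
            E -= dE
            if np.max(np.abs(dE)) < tol:
                break
        return E
    def rv_model(t, P, K, e, omega, t0, gamma=0.0):
        """Radial velocity of a star with one companion (no planet-planet interactions)."""
        M = 2.0 * np.pi * ((t - t0) / P % 1.0)      # mean anomaly: this step needs double precision
        E = solve_kepler(M, e)
        nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                              np.sqrt(1 - e) * np.cos(E / 2))   # true anomaly
        return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))
    def chi2(params, t, rv, sigma):
        return np.sum(((rv - rv_model(t, *params)) / sigma) ** 2)
    # toy usage with synthetic data
    t = np.sort(np.random.default_rng(0).uniform(0, 200, 256))
    rv = rv_model(t, 12.3, 55.0, 0.2, 1.0, 3.0) + np.random.default_rng(1).normal(0, 3.0, t.size)
    print(chi2((12.3, 55.0, 0.2, 1.0, 3.0), t, rv, 3.0))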
Adaptation and perceptual norms
NASA Astrophysics Data System (ADS)
Webster, Michael A.; Yasuda, Maiko; Haber, Sara; Leonard, Deanne; Ballardini, Nicole
2007-02-01
We used adaptation to examine the relationship between perceptual norms (the stimuli observers describe as psychologically neutral) and response norms (the stimulus levels that leave visual sensitivity in a neutral or balanced state). Adapting to stimuli on opposite sides of a neutral point (e.g. redder or greener than white) biases appearance in opposite ways. Thus the adapting stimulus can be titrated to find the unique adapting level that does not bias appearance. We compared these response norms to subjectively defined neutral points both within the same observer (at different retinal eccentricities) and between observers. These comparisons were made for visual judgments of color, image focus, and human faces, stimuli that are very different and may depend on very different levels of processing, yet which share the property that for each there is a well defined and perceptually salient norm. In each case the adaptation aftereffects were consistent with an underlying sensitivity basis for the perceptual norm. Specifically, response norms were similar to and thus covaried with the perceptual norm, and under common adaptation differences between subjectively defined norms were reduced. These results are consistent with models of norm-based codes and suggest that these codes underlie an important link between visual coding and visual experience.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saar, Martin O.; Seyfried, Jr., William E.; Longmire, Ellen K.
2016-06-24
A total of 12 publications and 23 abstracts were produced as a result of this study. In particular, the compilation of a thermodynamic database utilizing consistent, current thermodynamic data is a major step toward accurately modeling multi-phase fluid interactions with solids. Existing databases designed for aqueous fluids did not mesh well with existing solid phase databases. Addition of a second liquid phase (CO2) magnifies the inconsistencies between aqueous and solid thermodynamic databases. Overall, the combination of high temperature and pressure lab studies (task 1), using a purpose built apparatus, and solid characterization (task 2), using XRCT and more developed technologies, allowed observation of dissolution and precipitation processes under CO2 reservoir conditions. These observations were combined with results from PIV experiments on multi-phase fluids (task 3) in typical flow path geometries. The results of tasks 1, 2, and 3 were compiled and integrated into numerical models utilizing Lattice-Boltzmann simulations (task 4) to realistically model the physical processes and were ultimately folded into the TOUGH2 code for reservoir scale modeling (task 5). Compilation of the thermodynamic database assisted comparisons to PIV experiments (Task 3) and greatly improved the Lattice Boltzmann (Task 4) and TOUGH2 (Task 5) simulations. PIV (Task 3) and the experimental apparatus (Task 1) identified problem areas in the TOUGHREACT code. Additional lab experiments and coding work have been integrated into an improved numerical modeling code.
Measuring Diagnoses: ICD Code Accuracy
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-01-01
Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999
The effects of articulatory suppression on word recognition in Serbian.
Tenjović, Lazar; Lalović, Dejan
2005-11-01
The relatedness of phonological coding to articulatory mechanisms in visual word recognition varies across writing systems. While articulatory suppression (i.e., continuous verbalising during a visual word processing task) has a detrimental effect on the processing of Japanese words printed in the regular syllabic Kana script, it has no such effect on the processing of irregular alphabetic English words. Besner (1990) proposed an experiment in the Serbian language, written in Cyrillic and Roman scripts that are regular but alphabetic, to disentangle the importance of script regularity vs. the syllabic-alphabetic dimension for the effects observed. Articulatory suppression had an equally detrimental effect in a lexical decision task for both alphabetically regular and distorted (by a mixture of the two alphabets) Serbian words, but comparisons of the articulatory suppression effect size obtained in Serbian to those obtained in English and Japanese suggest "alphabeticity-syllabicity" to be the more critical dimension in determining the relatedness of phonological coding and articulatory activity.
Studies of dynamic processes related to active experiments in space plasmas
NASA Technical Reports Server (NTRS)
Banks, Peter M.; Neubert, Torsten
1992-01-01
This is the final report for grant NAGw-2055, 'Studies of Dynamic Processes Related to Active Experiments in Space Plasmas', covering research performed at the University of Michigan. The grant was awarded to study: (1) theoretical and data analysis of data from the CHARGE-2 rocket experiment (1 keV; 1-46 mA electron beam ejections) and the Spacelab-2 shuttle experiment (1 keV; 100 mA); (2) studies of the interaction of an electron beam, emitted from an ionospheric platform, with the ambient neutral atmosphere and plasma by means of a newly developed computer simulation model, relating model predictions with CHARGE-2 observations of return currents observed during electron beam emissions; and (3) development of a self-consistent model for the charge distribution on a moving conducting tether in a magnetized plasma and for the potential structure in the plasma surrounding the tether. Our main results include: (1) the computer code developed for the interaction of electron beams with the neutral atmosphere and plasma is able to model observed return fluxes to the CHARGE-2 sounding rocket payload; and (2) a 3-D electromagnetic and relativistic particle simulation code was developed.
Processing module operating methods, processing modules, and communications systems
McCown, Steven Harvey; Derr, Kurt W.; Moore, Troy
2014-09-09
A processing module operating method includes using a processing module physically connected to a wireless communications device, requesting that the wireless communications device retrieve encrypted code from a web site and receiving the encrypted code from the wireless communications device. The wireless communications device is unable to decrypt the encrypted code. The method further includes using the processing module, decrypting the encrypted code, executing the decrypted code, and preventing the wireless communications device from accessing the decrypted code. Another processing module operating method includes using a processing module physically connected to a host device, executing an application within the processing module, allowing the application to exchange user interaction data communicated using a user interface of the host device with the host device, and allowing the application to use the host device as a communications device for exchanging information with a remote device distinct from the host device.
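The flow described in the abstract (fetch encrypted code via the host, decrypt and execute it only inside the module) can be illustrated with a few lines of Python. This is a conceptual sketch, not the patented implementation: Fernet from the cryptography package is used merely as a stand-in for whatever cipher and key-provisioning scheme the processing module would actually employ, and exec() stands in for the module's protected execution environment.
    from cryptography.fernet import Fernet
    def run_protected_code(encrypted_blob: bytes, key: bytes) -> dict:
        """Decrypt and execute code inside the trusted module; the host or wireless
        device only ever handles the encrypted blob, never the key or the plaintext."""
        plaintext = Fernet(key).decrypt(encrypted_blob)   # key never leaves the module
        namespace: dict = {}
        exec(plaintext.decode("utf-8"), namespace)        # execution confined to the module
        return {k: v for k, v in namespace.items() if not k.startswith("__")}
    # illustrative round trip (in the described scheme the blob would be fetched from a
    # web site by the wireless device and handed over still encrypted)
    key = Fernet.generate_key()
    blob = Fernet(key).encrypt(b"result = 6 * 7")
    print(run_protected_code(blob, key)["result"])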
Baddeley, Michelle; Tobler, Philippe N.; Schultz, Wolfram
2016-01-01
Given that the range of rewarding and punishing outcomes of actions is large but neural coding capacity is limited, efficient processing of outcomes by the brain is necessary. One mechanism to increase efficiency is to rescale neural output to the range of outcomes expected in the current context, and process only experienced deviations from this expectation. However, this mechanism comes at the cost of not being able to discriminate between unexpectedly low losses when times are bad versus unexpectedly high gains when times are good. Thus, too much adaptation would result in disregarding information about the nature and absolute magnitude of outcomes, preventing learning about the longer-term value structure of the environment. Here we investigate the degree of adaptation in outcome coding brain regions in humans, for directly experienced outcomes and observed outcomes. We scanned participants while they performed a social learning task in gain and loss blocks. Multivariate pattern analysis showed two distinct networks of brain regions adapt to the most likely outcomes within a block. Frontostriatal areas adapted to directly experienced outcomes, whereas lateral frontal and temporoparietal regions adapted to observed social outcomes. Critically, in both cases, adaptation was incomplete and information about whether the outcomes arose in a gain block or a loss block was retained. Univariate analysis confirmed incomplete adaptive coding in these regions but also detected nonadapting outcome signals. Thus, although neural areas rescale their responses to outcomes for efficient coding, they adapt incompletely and keep track of the longer-term incentives available in the environment. SIGNIFICANCE STATEMENT Optimal value-based choice requires that the brain precisely and efficiently represents positive and negative outcomes. One way to increase efficiency is to adapt responding to the most likely outcomes in a given context. However, too strong adaptation would result in loss of precise representation (e.g., when the avoidance of a loss in a loss-context is coded the same as receipt of a gain in a gain-context). We investigated an intermediate form of adaptation that is efficient while maintaining information about received gains and avoided losses. We found that frontostriatal areas adapted to directly experienced outcomes, whereas lateral frontal and temporoparietal regions adapted to observed social outcomes. Importantly, adaptation was intermediate, in line with influential models of reference dependence in behavioral economics. PMID:27683899
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawrence, Earl; Wiel, Scott Vander
This code implements the non-homogeneous Poisson process model for estimating the rate of fast radio bursts. It includes modeling terms for the distribution of events in the Universe and the detection sensitivity of the radio telescopes and arrays used in observation. The model is described in LA-UR-16-26261.
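The model in LA-UR-16-26261 is not reproduced here, but the general shape of such an approach can be sketched as a non-homogeneous Poisson process whose intensity combines a source population term with a detection-efficiency term. In the Python sketch below the power-law fluence distribution, the logistic detection-efficiency curve, the fluence values, and the exposure are all hypothetical placeholders.
    import numpy as np
    from scipy.optimize import minimize
    # Hypothetical observed fluences (Jy ms) and total exposure; eps() is a stand-in
    # for the detection sensitivity of the telescope or array.
    fluences = np.array([1.2, 0.8, 3.5, 0.6, 2.1, 5.0, 0.9, 1.7])
    exposure = 120.0
    eps = lambda F: 1.0 / (1.0 + np.exp(-(F - 0.7) / 0.1))      # soft detection threshold
    grid = np.linspace(0.3, 50.0, 2000)                          # fluence grid for the rate integral
    dF = grid[1] - grid[0]
    def neg_loglik(theta):
        """Negative log-likelihood of a non-homogeneous Poisson process whose intensity is
        a power-law source term times the detection efficiency: lam(F) = rate*alpha*F^-(alpha+1)."""
        rate, alpha = np.exp(theta)                              # log-parametrized to stay positive
        lam = lambda F: rate * alpha * F ** -(alpha + 1.0)
        expected = exposure * np.sum(lam(grid) * eps(grid)) * dF  # expected number of detections
        return expected - np.sum(np.log(exposure * lam(fluences) * eps(fluences)))
    fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
    rate_hat, alpha_hat = np.exp(fit.x)
    print("rate =", round(rate_hat, 3), "alpha =", round(alpha_hat, 3))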
Quantitative Data Analysis To Determine Best Food Cooling Practices in U.S. Restaurants†
Schaffner, Donald W.; Brown, Laura Green; Ripley, Danny; Reimann, Dave; Koktavy, Nicole; Blade, Henry; Nicholas, David
2017-01-01
Data collected by the Centers for Disease Control and Prevention (CDC) show that improper cooling practices contributed to more than 500 foodborne illness outbreaks associated with restaurants or delis in the United States between 1998 and 2008. CDC's Environmental Health Specialists Network (EHS-Net) personnel collected data in approximately 50 randomly selected restaurants in nine EHS-Net sites in 2009 to 2010 and measured the temperatures of cooling food at the beginning and the end of the observation period. Those beginning and ending points were used to estimate cooling rates. The most common cooling method was refrigeration, used in 48% of cooling steps. Other cooling methods included ice baths (19%), room-temperature cooling (17%), ice-wand cooling (7%), and adding ice or frozen food to the cooling food as an ingredient (2%). Sixty-five percent of cooling observations had an estimated cooling rate that was compliant with the 2009 Food and Drug Administration Food Code guideline (cooling to 41°F [5°C] in 6 h). Large cuts of meat and stews had the slowest overall estimated cooling rate, approximately equal to that specified in the Food Code guideline. Pasta and noodles were the fastest cooling foods, with a cooling time of just over 2 h. Foods not being actively monitored by food workers were more than twice as likely to cool more slowly than recommended in the Food Code guideline. Food stored at a depth greater than 7.6 cm (3 in.) was twice as likely to cool more slowly than specified in the Food Code guideline. Unventilated cooling foods were almost twice as likely to cool more slowly than specified in the Food Code guideline. Our data suggest that several best cooling practices can contribute to a proper cooling process. Inspectors unable to assess the full cooling process should consider assessing specific cooling practices as an alternative. Future research could validate our estimation method and study the effect of specific practices on the full cooling process. PMID:25836405
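The abstract does not spell out how cooling rates were estimated from the beginning and ending temperatures, so the following Python sketch shows just one plausible approach: assume Newtonian (exponential) cooling toward an assumed ambient temperature, fit the decay constant to the observed pair of temperatures, and extrapolate to the 41°F (5°C) Food Code target. The ambient temperature and the example readings are assumptions for illustration.
    import math
    def hours_to_target(T_start, T_end, elapsed_h, T_ambient=38.0, T_target=41.0):
        """Estimate total hours to reach T_target assuming Newtonian (exponential)
        cooling toward T_ambient, fitted to an observed start/end temperature pair."""
        k = -math.log((T_end - T_ambient) / (T_start - T_ambient)) / elapsed_h
        return math.log((T_start - T_ambient) / (T_target - T_ambient)) / k
    # e.g. a soup observed cooling from 120F to 70F over 2 h in a 38F walk-in cooler
    t = hours_to_target(120.0, 70.0, 2.0)
    print(f"estimated {t:.1f} h to 41F ->", "compliant" if t <= 6.0 else "non-compliant")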
Experimental Demonstration of Fault-Tolerant State Preparation with Superconducting Qubits.
Takita, Maika; Cross, Andrew W; Córcoles, A D; Chow, Jerry M; Gambetta, Jay M
2017-11-03
Robust quantum computation requires encoding delicate quantum information into degrees of freedom that are hard for the environment to change. Quantum encodings have been demonstrated in many physical systems by observing and correcting storage errors, but applications require not just storing information; we must accurately compute even with faulty operations. The theory of fault-tolerant quantum computing illuminates a way forward by providing a foundation and collection of techniques for limiting the spread of errors. Here we implement one of the smallest quantum codes in a five-qubit superconducting transmon device and demonstrate fault-tolerant state preparation. We characterize the resulting code words through quantum process tomography and study the free evolution of the logical observables. Our results are consistent with fault-tolerant state preparation in a protected qubit subspace.
Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen
2006-01-01
This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and predictions intervals, which quantify the uncertainty of model simulated values when the model is not linear. 
CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
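The heart of the regression described above, a weighted least-squares fit of a black-box process model using perturbation sensitivities, can be sketched compactly. The Python code below is a bare, undamped Gauss-Newton loop with forward-difference sensitivities and a toy two-parameter model; UCODE_2005 itself uses a modified (damped) Gauss-Newton or double-dogleg scheme with many safeguards that are omitted here.
    import numpy as np
    def gauss_newton(run_model, p0, obs, weights, n_iter=10, delta=1e-4):
        """Weighted least-squares calibration of a black-box process model by a plain
        Gauss-Newton method with forward-difference sensitivities."""
        p = np.array(p0, float)
        W = np.diag(weights)
        for _ in range(n_iter):
            sim = run_model(p)
            r = obs - sim                                   # residuals
            J = np.empty((obs.size, p.size))
            for j in range(p.size):                         # forward-difference sensitivities
                dp = np.zeros_like(p)
                dp[j] = delta * max(abs(p[j]), 1.0)
                J[:, j] = (run_model(p + dp) - sim) / dp[j]
            step, *_ = np.linalg.lstsq(np.sqrt(W) @ J, np.sqrt(W) @ r, rcond=None)
            p += step
        r = obs - run_model(p)
        return p, float(r @ W @ r)                          # parameters, objective value
    # toy "process model": two-parameter exponential decay observed at fixed times
    times = np.linspace(0, 10, 12)
    model = lambda p: p[0] * np.exp(-p[1] * times)
    obs = model([5.0, 0.3]) + 0.05 * np.random.default_rng(0).normal(size=times.size)
    p_hat, ssr = gauss_newton(model, [4.0, 0.5], obs, np.ones(times.size))
    print(p_hat, ssr)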
NASA Astrophysics Data System (ADS)
Testa, P.; Polito, V.; De Pontieu, B.; Carlsson, M.; Reale, F.; Allred, J. C.; Hansteen, V. H.
2017-12-01
We investigate coronal heating properties in active region cores in non-flaring conditions, using high spatial, spectral, and temporal resolution chromospheric/transition region/coronal observations coupled with detailed modeling. We will focus, in particular, on observations with the Interface Region Imaging Spectrograph (IRIS), joint with observations with Hinode (XRT and EIS) and SDO/AIA. We will discuss how these observations and models (1D HD and 3D MHD, with the RADYN and Bifrost codes) provide useful diagnostics of the coronal heating processes and mechanisms of energy transport.
Bifrost: a Modular Python/C++ Framework for Development of High-Throughput Data Analysis Pipelines
NASA Astrophysics Data System (ADS)
Cranmer, Miles; Barsdell, Benjamin R.; Price, Danny C.; Garsden, Hugh; Taylor, Gregory B.; Dowell, Jayce; Schinzel, Frank; Costa, Timothy; Greenhill, Lincoln J.
2017-01-01
Large radio interferometers have data rates that render long-term storage of raw correlator data infeasible, thus motivating development of real-time processing software. For high-throughput applications, processing pipelines are challenging to design and implement. Motivated by science efforts with the Long Wavelength Array, we have developed Bifrost, a novel Python/C++ framework that eases the development of high-throughput data analysis software by packaging algorithms as black box processes in a directed graph. This strategy to modularize code allows astronomers to create parallelism without code adjustment. Bifrost uses CPU/GPU ’circular memory’ data buffers that enable ready introduction of arbitrary functions into the processing path for ’streams’ of data, and allow pipelines to automatically reconfigure in response to astrophysical transient detection or input of new observing settings. We have deployed and tested Bifrost at the latest Long Wavelength Array station, in Sevilleta National Wildlife Refuge, NM, where it handles throughput exceeding 10 Gbps per CPU core.
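The architectural idea, black-box processing stages connected by ring buffers into a directed graph, can be illustrated without the Bifrost package itself. The Python sketch below uses threads and bounded queues as stand-ins for Bifrost's CPU/GPU circular memory buffers; it is a conceptual toy, not Bifrost's actual API.
    import threading, queue
    class Block(threading.Thread):
        """A black-box processing stage; the queues play the role of the ring buffers
        that connect stages into a directed graph."""
        def __init__(self, func, inp=None, out=None):
            super().__init__(daemon=True)
            self.func, self.inp, self.out = func, inp, out
        def run(self):
            while True:
                data = self.inp.get() if self.inp else None
                if data is StopIteration:
                    if self.out:
                        self.out.put(StopIteration)
                    break
                result = self.func(data)
                if self.out:
                    self.out.put(result)
                if self.inp is None:          # source block: emit once, then stop
                    self.out.put(StopIteration)
                    break
    q1, q2 = queue.Queue(maxsize=4), queue.Queue(maxsize=4)
    stages = [Block(lambda _: list(range(8)), out=q1),               # read "raw samples"
              Block(lambda x: [v * v for v in x], inp=q1, out=q2)]   # transform stage
    for s in stages:
        s.start()
    print(q2.get())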
Processing concrete words: fMRI evidence against a specific right-hemisphere involvement.
Fiebach, Christian J; Friederici, Angela D
2004-01-01
Behavioral, patient, and electrophysiological studies have been taken as support for the assumption that processing of abstract words is confined to the left hemisphere, whereas concrete words are processed also by right-hemispheric brain areas. These are thought to provide additional information from an imaginal representational system, as postulated in the dual-coding theory of memory and cognition. Here we report new event-related fMRI data on the processing of concrete and abstract words in a lexical decision task. While abstract words activated a subregion of the left inferior frontal gyrus (BA 45) more strongly than concrete words, specific activity for concrete words was observed in the left basal temporal cortex. These data as well as data from other neuroimaging studies reviewed here are not compatible with the assumption of a specific right-hemispheric involvement for concrete words. The combined findings rather suggest a revised view of the neuroanatomical bases of the imaginal representational system assumed in the dual-coding theory, at least with respect to word recognition.
GRay: A MASSIVELY PARALLEL GPU-BASED CODE FOR RAY TRACING IN RELATIVISTIC SPACETIMES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, Chi-kwan; Psaltis, Dimitrios; Özel, Feryal
We introduce GRay, a massively parallel integrator designed to trace the trajectories of billions of photons in a curved spacetime. This graphics-processing-unit (GPU)-based integrator employs the stream processing paradigm, is implemented in CUDA C/C++, and runs on nVidia graphics cards. The peak performance of GRay using single-precision floating-point arithmetic on a single GPU exceeds 300 GFLOP (or 1 ns per photon per time step). For a realistic problem, where the peak performance cannot be reached, GRay is two orders of magnitude faster than existing central-processing-unit-based ray-tracing codes. This performance enhancement allows more effective searches of large parameter spaces when comparing theoretical predictions of images, spectra, and light curves from the vicinities of compact objects to observations. GRay can also perform on-the-fly ray tracing within general relativistic magnetohydrodynamic algorithms that simulate accretion flows around compact objects. Making use of this algorithm, we calculate the properties of the shadows of Kerr black holes and the photon rings that surround them. We also provide accurate fitting formulae of their dependencies on black hole spin and observer inclination, which can be used to interpret upcoming observations of the black holes at the center of the Milky Way, as well as M87, with the Event Horizon Telescope.
Cognitive-Processing Bias in Chinese Student Teachers with Strong and Weak Professional Identity.
Wang, Xin-Qiang; Zhu, Jun-Cheng; Liu, Lu; Chen, Xiang-Yu
2017-01-01
Professional identity plays an important role in career development. Although many studies have examined professional identity, differences in cognitive-processing biases between Chinese student teachers with strong and weak professional identity are poorly understood. The current study adopted Tversky's social-cognitive experimental paradigm to explore cognitive-processing biases in Chinese student teachers with strong and weak professional identity. Experiment 1 showed that participants with strong professional identity exhibited stronger positive-coding bias toward positive profession-related life events, relative to that observed in those with weak professional identity. Experiment 2 showed that participants with strong professional identity exhibited greater recognition bias for previously read items, relative to that observed in those with weak professional identity. Overall, the results suggested that participants with strong professional identity exhibited greater positive cognitive-processing bias relative to that observed in those with weak professional identity.
Nieder, Andreas; Miller, Earl K
2003-01-09
Whether cognitive representations are better conceived as language-based, symbolic representations or perceptually related, analog representations is a subject of debate. If cognitive processes parallel perceptual processes, then fundamental psychophysical laws should hold for each. To test this, we analyzed both behavioral and neuronal representations of numerosity in the prefrontal cortex of rhesus monkeys. The data were best described by a nonlinearly compressed scaling of numerical information, as postulated by the Weber-Fechner law or Stevens' law for psychophysical/sensory magnitudes. This nonlinear compression was observed on the neural level during the acquisition phase of the task and maintained through the memory phase with no further compression. These results suggest that certain cognitive and perceptual/sensory representations share the same fundamental mechanisms and neural coding schemes.
PROcess Based Diagnostics PROBE
NASA Technical Reports Server (NTRS)
Clune, T.; Schmidt, G.; Kuo, K.; Bauer, M.; Oloso, H.
2013-01-01
Many of the aspects of the climate system that are of the greatest interest (e.g., the sensitivity of the system to external forcings) are emergent properties that arise via the complex interplay between disparate processes. This is also true for climate models: most diagnostics are not a function of an isolated portion of source code, but rather are affected by multiple components and procedures. Thus any model-observation mismatch is hard to attribute to any specific piece of code or imperfection in a specific model assumption. An alternative approach is to identify diagnostics that are more closely tied to specific processes -- implying that if a mismatch is found, it should be much easier to identify and address specific algorithmic choices that will improve the simulation. However, this approach requires looking at model output and observational data in a more sophisticated way than the more traditional production of monthly or annual mean quantities. The data must instead be filtered in time and space for examples of the specific process being targeted. We are developing a data analysis environment called PROcess-Based Explorer (PROBE) that seeks to enable efficient and systematic computation of process-based diagnostics on very large sets of data. In this environment, investigators can define arbitrarily complex filters and then seamlessly perform computations in parallel on the filtered output from their model. The same analysis can be performed on additional related data sets (e.g., reanalyses), thereby enabling routine comparisons between model and observational data. PROBE also incorporates workflow technology to automatically update computed diagnostics for subsequent executions of a model. In this presentation, we will discuss the design and current status of PROBE as well as share results from some preliminary use cases.
Pre- and Post-Processing Tools to Streamline the CFD Process
NASA Technical Reports Server (NTRS)
Dorney, Suzanne Miller
2002-01-01
This viewgraph presentation provides information on software development tools to facilitate the use of CFD (Computational Fluid Dynamics) codes. The specific CFD codes FDNS and CORSAIR are profiled, and uses for software development tools with these codes during pre-processing, interim-processing, and post-processing are explained.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-19
...: Notice. SUMMARY: The DOE participates in the code development process of the International Code Council... notice outlines the process by which DOE produces code change proposals, and participates in the ICC code development process. FOR FURTHER INFORMATION CONTACT: Jeremiah Williams, U.S. Department of Energy, Office of...
LOINC, a universal standard for identifying laboratory observations: a 5-year update.
McDonald, Clement J; Huff, Stanley M; Suico, Jeffrey G; Hill, Gilbert; Leavelle, Dennis; Aller, Raymond; Forrey, Arden; Mercer, Kathy; DeMoor, Georges; Hook, John; Williams, Warren; Case, James; Maloney, Pat
2003-04-01
The Logical Observation Identifier Names and Codes (LOINC) database provides a universal code system for reporting laboratory and other clinical observations. Its purpose is to identify observations in electronic messages such as Health Level Seven (HL7) observation messages, so that when hospitals, health maintenance organizations, pharmaceutical manufacturers, researchers, and public health departments receive such messages from multiple sources, they can automatically file the results in the right slots of their medical records, research, and/or public health systems. For each observation, the database includes a code (of which 25 000 are laboratory test observations), a long formal name, a "short" 30-character name, and synonyms. The database comes with a mapping program called Regenstrief LOINC Mapping Assistant (RELMA(TM)) to assist the mapping of local test codes to LOINC codes and to facilitate browsing of the LOINC results. Both LOINC and RELMA are available at no cost from http://www.regenstrief.org/loinc/. The LOINC medical database carries records for >30 000 different observations. LOINC codes are being used by large reference laboratories and federal agencies, e.g., the CDC and the Department of Veterans Affairs, and are part of the Health Insurance Portability and Accountability Act (HIPAA) attachment proposal. Internationally, they have been adopted in Switzerland, Hong Kong, Australia, and Canada, and by the German national standards organization, the Deutsches Institut für Normung. Laboratories should include LOINC codes in their outbound HL7 messages so that clinical and research clients can easily integrate these results into their clinical and research repositories. Laboratories should also encourage instrument vendors to deliver LOINC codes in their instrument outputs and demand LOINC codes in HL7 messages they get from reference laboratories to avoid the need to lump so many referral tests under the "send out lab" code.
SIMINOFF, LAURA A.; STEP, MARY M.
2011-01-01
Many observational coding schemes have been offered to measure communication in health care settings. These schemes fall short of capturing multiple functions of communication among providers, patients, and other participants. After a brief review of observational communication coding, the authors present a comprehensive scheme for coding communication that is (a) grounded in communication theory, (b) accounts for instrumental and relational communication, and (c) captures important contextual features with tailored coding templates: the Siminoff Communication Content & Affect Program (SCCAP). To test SCCAP reliability and validity, the authors coded data from two communication studies. The SCCAP provided reliable measurement of communication variables including tailored content areas and observer ratings of speaker immediacy, affiliation, confirmation, and disconfirmation behaviors. PMID:21213170
MOCCA code for star cluster simulation: comparison with optical observations using COCOA
NASA Astrophysics Data System (ADS)
Askar, Abbas; Giersz, Mirek; Pych, Wojciech; Olech, Arkadiusz; Hypki, Arkadiusz
2016-02-01
We introduce and present preliminary results from COCOA (Cluster simulatiOn Comparison with ObservAtions) code for a star cluster after 12 Gyr of evolution simulated using the MOCCA code. The COCOA code is being developed to quickly compare results of numerical simulations of star clusters with observational data. We use COCOA to obtain parameters of the projected cluster model. For comparison, a FITS file of the projected cluster was provided to observers so that they could use their observational methods and techniques to obtain cluster parameters. The results show that the similarity of cluster parameters obtained through numerical simulations and observations depends significantly on the quality of observational data and photometric accuracy.
A-Track: A New Approach for Detection of Moving Objects in FITS Images
NASA Astrophysics Data System (ADS)
Kılıç, Yücel; Karapınar, Nurdan; Atay, Tolga; Kaplan, Murat
2016-07-01
Small planet and asteroid observations are important for understanding the origin and evolution of the Solar System. In this work, we have developed a fast and robust pipeline, called A-Track, for detecting asteroids and comets in sequential telescope images. The moving objects are detected using a modified line detection algorithm, called ILDA. We have coded the pipeline in Python 3, where we have made use of various scientific modules in Python to process the FITS images. We tested the code on photometric data taken by an SI-1100 CCD with a 1-meter telescope at TUBITAK National Observatory, Antalya. The pipeline can be used to analyze large data archives or daily sequential data. The code is hosted on GitHub under the GNU GPL v3 license.
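The essential detection idea, finding sets of detections that move linearly across sequential frames, can be sketched in a few lines. The Python code below is a simplified stand-in for ILDA, not the A-Track implementation: it brute-forces detection triplets over three frames, keeps those consistent with constant linear motion, and rejects stationary sources with a minimum-displacement cut; the tolerances and the toy catalogues are arbitrary.
    import itertools
    import numpy as np
    def find_movers(catalogs, times, tol=1.5, min_motion=3.0):
        """Given source lists [(x, y), ...] for three frames and the frame times, return
        triplets of detections whose positions are consistent with linear motion."""
        movers = []
        f = (times[1] - times[0]) / (times[2] - times[0])
        for (p1, p2, p3) in itertools.product(*catalogs):
            p1, p2, p3 = map(np.asarray, (p1, p2, p3))
            predicted = p1 + f * (p3 - p1)            # where a linear mover would sit in frame 2
            if np.linalg.norm(p3 - p1) > min_motion and np.hypot(*(p2 - predicted)) < tol:
                movers.append((tuple(p1), tuple(p2), tuple(p3)))
        return movers
    # three toy frames: a field of stationary stars plus one object moving ~5 px/frame in x
    stars = [(10.0, 40.0), (55.2, 12.8), (80.1, 66.0)]
    frames = [stars + [(20.0 + 5.0 * i, 30.0 + 0.5 * i)] for i in range(3)]
    print(find_movers(frames, times=[0.0, 1.0, 2.0]))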
Advanced Chemical Propulsion Study
NASA Technical Reports Server (NTRS)
Woodcock, Gordon; Byers, Dave; Alexander, Leslie A.; Krebsbach, Al
2004-01-01
A study was performed of advanced chemical propulsion technology application to space science (Code S) missions. The purpose was to begin the process of selecting chemical propulsion technology advancement activities that would provide the greatest benefits to Code S missions. Several missions were selected from Code S planning data, and a range of advanced chemical propulsion options was analyzed to assess capabilities and benefits for these missions. Selected beneficial applications were found for higher-performing bipropellants, gelled propellants, and cryogenic propellants. Technology advancement recommendations included cryocoolers and small turbopump engines for cryogenic propellants; space storable propellants such as LOX-hydrazine; and advanced monopropellants. It was noted that fluorine-bearing oxidizers offer performance gains over more benign oxidizers. Potential benefits were observed for gelled propellants that could be allowed to freeze, then thawed for use.
Perceiving groups: The people perception of diversity and hierarchy.
Phillips, L Taylor; Slepian, Michael L; Hughes, Brent L
2018-05-01
The visual perception of individuals has received considerable attention (visual person perception), but little social psychological work has examined the processes underlying the visual perception of groups of people (visual people perception). Ensemble-coding is a visual mechanism that automatically extracts summary statistics (e.g., average size) of lower-level sets of stimuli (e.g., geometric figures), and also extends to the visual perception of groups of faces. Here, we consider whether ensemble-coding supports people perception, allowing individuals to form rapid, accurate impressions about groups of people. Across nine studies, we demonstrate that people visually extract high-level properties (e.g., diversity, hierarchy) that are unique to social groups, as opposed to individual persons. Observers rapidly and accurately perceived group diversity and hierarchy, or variance across race, gender, and dominance (Studies 1-3). Further, results persist when observers are given very short display times, backward pattern masks, color- and contrast-controlled stimuli, and absolute versus relative response options (Studies 4a-7b), suggesting robust effects supported specifically by ensemble-coding mechanisms. Together, we show that humans can rapidly and accurately perceive not only individual persons, but also emergent social information unique to groups of people. These people perception findings demonstrate the importance of visual processes for enabling people to perceive social groups and behave effectively in group-based social interactions. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Sansivero, Fabio; Vilardo, Giuseppe; Caputo, Teresa
2017-04-01
The permanent thermal infrared surveillance network of Osservatorio Vesuviano (INGV) is composed of 6 stations which acquire IR frames of fumarole fields in the Campi Flegrei caldera and inside the Vesuvius crater (Italy). The IR frames are uploaded to a dedicated server in the Surveillance Center of Osservatorio Vesuviano in order to process the infrared data and to extract all the information they contain. In a first phase the infrared data are processed by an automated system (A.S.I.R.A. Acq - Automated System of IR Analysis and Acquisition) developed in the Matlab environment and with a user-friendly graphic user interface (GUI). ASIRA daily generates time-series of residual temperature values of the maximum temperatures observed in the IR scenes after the removal of seasonal effects. These time-series are displayed in the Surveillance Room of Osservatorio Vesuviano and provide information about the evolution of the shallow temperature field of the observed areas. In particular, the features of ASIRA Acq include: a) efficient quality selection of IR scenes, b) co-registration of IR images with respect to a reference frame, c) seasonal correction using a background-removal methodology, d) filing of the IR matrices and of the processed data in shared archives accessible to interrogation. The daily archived records can also be processed by ASIRA Plot (Matlab code with GUI) to visualize IR data time-series and to help in evaluating input parameters for further data processing and analysis. Additional processing features are accomplished in a second phase by ASIRA Tools, a Matlab code with GUI developed to extract further information from the dataset in an automated way. The main functions of ASIRA Tools are: a) the analysis of temperature variations of each pixel of the IR frame in a given time interval, b) the removal of seasonal effects from the temperature of every pixel in the IR frames by using an analytic approach (removal of the sinusoidal long-term seasonal component by using a polynomial fit Matlab function - LTFC_SCOREF), c) the export of data in different raster formats (i.e. Surfer grd). An interesting example of the elaborations produced by ASIRA Tools is the map of the temperature changing rate, which provides remarkable information about the potential migration of fumarole activity. The high efficiency of Matlab in processing matrix data from IR scenes and the flexibility of this code-developing tool proved to be very useful for producing applications for volcanic surveillance aimed at monitoring the evolution of the surface temperature field in diffuse degassing volcanic areas.
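The per-pixel seasonal correction in ASIRA Tools is implemented in Matlab (LTFC_SCOREF); as a language-neutral illustration of the same idea, the Python sketch below fits an offset, a linear trend, and an annual sinusoid to a single pixel's temperature series by least squares and subtracts the sinusoidal part. The synthetic series and its parameters are invented for the example.
    import numpy as np
    def remove_seasonal(t_days, temps):
        """Fit offset + linear trend + annual sinusoid to one pixel's temperature series
        and return the residuals (temperatures with the seasonal component removed)."""
        w = 2.0 * np.pi / 365.25
        A = np.column_stack([np.ones_like(t_days), t_days,
                             np.sin(w * t_days), np.cos(w * t_days)])
        coef, *_ = np.linalg.lstsq(A, temps, rcond=None)
        seasonal = A[:, 2:] @ coef[2:]                 # sinusoidal component only
        return temps - seasonal, coef
    # toy series: 2 years of daily maxima with a 10-degree annual swing plus a weak warming trend
    t = np.arange(730.0)
    rng = np.random.default_rng(0)
    series = 60 + 0.005 * t + 10 * np.sin(2 * np.pi * t / 365.25 + 0.4) + rng.normal(0, 0.5, t.size)
    residuals, coef = remove_seasonal(t, series)
    print(coef.round(3), residuals.std().round(3))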
Samaranayake, N R; Cheung, S T D; Cheng, K; Lai, K; Chui, W C M; Cheung, B M Y
2014-06-01
We assessed the effects of a bar-code assisted medication administration system used without the support of computerised prescribing (stand-alone BCMA), on the dispensing process and its users. The stand-alone BCMA system was implemented in one ward of a teaching hospital. The number of dispensing steps, dispensing time and potential dispensing errors (PDEs) were directly observed one month before and eight months after the intervention. Attitudes of pharmacy and nursing staff were assessed using a questionnaire (Likert scale) and interviews. Among 1291 and 471 drug items observed before and after the introduction of the technology respectively, the number of dispensing steps increased from five to eight and time (standard deviation) to dispense one drug item by one staff personnel increased from 0.8 (0.09) to 1.5 (0.12) min. Among 2828 and 471 drug items observed before and after the intervention respectively, the number of PDEs increased significantly (P<0.001). 'Procedural errors' and 'missing drug items' were the frequently observed PDEs in the after study. 'Perceived usefulness' and 'job relevance' of the technology decreased significantly (P=0.003 and P=0.004 respectively) among users who participated in the before (N=16) and after (N=16) questionnaires surveys. Among the interviewees, pharmacy staff felt that the system offered less benefit to the dispensing process (9/16). Nursing staff perceived the system as useful in improving the accuracy of drug administration (7/10). Implementing a stand-alone BCMA system may slow down and complicate the dispensing process. Nursing staff believe the stand-alone BCMA system could improve the drug administration process but pharmacy staff believes the technology would be more helpful if supported by computerised prescribing. However, periodical assessments are needed to identify weaknesses in the process after implementation, and all users should be educated on the benefits of using this technology. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ott, Stephan; Herschel Science Ground Segment Consortium
2010-05-01
The Herschel Space Observatory, the fourth cornerstone mission in the ESA science program, was launched 14th of May 2009. With a 3.5 m telescope, it is the largest space telescope ever launched. Herschel's three instruments (HIFI, PACS, and SPIRE) perform photometry and spectroscopy in the 55 - 672 micron range and will deliver exciting science for the astronomical community during at least three years of routine observations. Since 2nd of December 2009 Herschel has been performing and processing observations in routine science mode. The development of the Herschel Data Processing System started eight years ago to support the data analysis for Instrument Level Tests. To fulfil the expectations of the astronomical community, additional resources were made available to implement a freely distributable Data Processing System capable of interactively and automatically reducing Herschel data at different processing levels. The system combines data retrieval, pipeline execution and scientific analysis in one single environment. The Herschel Interactive Processing Environment (HIPE) is the user-friendly face of Herschel Data Processing. The software is coded in Java and Jython to be platform independent and to avoid the need for commercial licenses. It is distributed under the GNU Lesser General Public License (LGPL), permitting everyone to access and to re-use its code. We will summarise the current capabilities of the Herschel Data Processing System and give an overview about future development milestones and plans, and how the astronomical community can contribute to HIPE. The Herschel Data Processing System is a joint development by the Herschel Science Ground Segment Consortium, consisting of ESA, the NASA Herschel Science Center, and the HIFI, PACS and SPIRE consortium members.
Olier, Ivan; Springate, David A; Ashcroft, Darren M; Doran, Tim; Reeves, David; Planner, Claire; Reilly, Siobhan; Kontopantelis, Evangelos
2016-01-01
The use of Electronic Health Records databases for medical research has become mainstream. In the UK, increasing use of Primary Care Databases is largely driven by almost complete computerisation and uniform standards within the National Health Service. Electronic Health Records research often begins with the development of a list of clinical codes with which to identify cases with a specific condition. We present a methodology and accompanying Stata and R commands (pcdsearch/Rpcdsearch) to help researchers in this task. We present severe mental illness as an example. We used the Clinical Practice Research Datalink, a UK Primary Care Database in which clinical information is largely organised using Read codes, a hierarchical clinical coding system. Pcdsearch is used to identify potentially relevant clinical codes and/or product codes from word-stubs and code-stubs suggested by clinicians. The returned code-lists are reviewed and codes relevant to the condition of interest are selected. The final code-list is then used to identify patients. We identified 270 Read codes linked to SMI and used them to identify cases in the database. We observed that our approach identified cases that would have been missed with a simpler approach using SMI registers defined within the UK Quality and Outcomes Framework. We described a framework for researchers of Electronic Health Records databases, for identifying patients with a particular condition or matching certain clinical criteria. The method is invariant to coding system or database and can be used with SNOMED CT, ICD or other medical classification code-lists.
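The stub-search step can be pictured with a small amount of code. The Python sketch below is not pcdsearch/Rpcdsearch, and the toy dictionary entries are illustrative rather than an authoritative Read code extract; it simply shows the two matching rules, description word-stubs and code prefixes, that generate a candidate list for clinical review.
    import re
    # toy code dictionary: (code, description) pairs standing in for a Read code browser table
    DICTIONARY = [
        ("E10..", "Schizophrenic disorders"),
        ("E11..", "Manic disorder"),
        ("E110.", "Manic disorder, single episode"),
        ("Eu20.", "[X]Schizophrenia"),
        ("H33..", "Asthma"),
    ]
    def search(word_stubs, code_stubs):
        """Return dictionary entries whose description matches any word stub or
        whose code starts with any code stub (case-insensitive)."""
        word_re = [re.compile(s, re.IGNORECASE) for s in word_stubs]
        hits = []
        for code, desc in DICTIONARY:
            if any(r.search(desc) for r in word_re) or \
               any(code.startswith(c) for c in code_stubs):
                hits.append((code, desc))
        return hits
    # stub lists such as a clinician might suggest for severe mental illness
    print(search(word_stubs=["schizo", "manic", "bipolar"], code_stubs=["E11"]))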
Provisional Coding Practices: Are They Really a Waste of Time?
Krypuy, Matthew; McCormack, Lena
2006-11-01
In order to facilitate effective clinical coding and hence the precise financial reimbursement of acute services, in 2005 Western District Health Service (WDHS) (located in regional Victoria, Australia) undertook a provisional coding trial for inpatient medical episodes to determine the magnitude and accuracy of clinical documentation. Utilising clinical coding software installed on a laptop computer, provisional coding was undertaken for all current overnight inpatient episodes under each physician one day prior to attending their daily ward round. The provisionally coded episodes were re-coded upon the completion of the discharge summary and the final Diagnostic Related Group (DRG) allocation and weight were compared to the provisional DRG assignment. A total of 54 out of 220 inpatient medical episodes were provisionally coded. This represented approximately a 25% cross section of the population selected for observation. Approximately 67.6% of the provisionally allocated DRGs were accurate in contrast to 32.4% which were subject to change once the discharge summary was completed. The DRG changes were primarily due to: disease progression of a patient during their care episode which could not be identified by clinical coding staff due to discharge prior to the following scheduled ward round; the discharge destination of particular patients; and the accuracy of clinical documentation on the discharge summary. The information gathered from the provisional coding trial supported the hypothesis that clinical documentation standards were sufficient and adequate to support precise clinical coding and DRG assignment at WDHS. The trial further highlighted the importance of a complete and accurate discharge summary available during the coding process of acute inpatient episodes.
Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images
NASA Technical Reports Server (NTRS)
Fischer, Bernd
2004-01-01
Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of the data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. The AutoBayes schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach by symbolic computation to derive closed-form solutions whenever possible. This is a major advantage over other statistical data analysis systems which use numerical approximations even in cases where closed-form solutions exist. AutoBayes is implemented in Prolog and comprises approximately 75,000 lines of code. In this paper, we take one typical scientific data analysis problem, analyzing planetary nebulae images taken by the Hubble Space Telescope, and show how AutoBayes can be used to automate the implementation of the necessary analysis programs. We initially follow the analysis described by Knuth and Hajian [KH02] and use AutoBayes to derive code for the published models. We show the details of the code derivation process, including the symbolic computations and automatic integration of library procedures, and compare the results of the automatically generated and manually implemented code. We then go beyond the original analysis and use AutoBayes to derive code for a simple image segmentation procedure based on a mixture model which can be used to automate a manual preprocessing step. Finally, we combine the original approach with the simple segmentation, which yields a more detailed analysis. This also demonstrates that AutoBayes makes it easy to combine different aspects of data analysis.
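To indicate the kind of program that the mixture-model segmentation step corresponds to, here is a hand-written Python sketch of EM for a two-component one-dimensional Gaussian mixture applied to pixel intensities. It is an independent illustration under simplifying assumptions (fixed number of components, synthetic data), not code generated by AutoBayes or used in the paper.
    import numpy as np
    def em_two_gaussians(x, n_iter=50):
        """EM for a two-component 1-D Gaussian mixture; returns the parameters and the
        posterior probability that each pixel belongs to the brighter component."""
        mu = np.array([x.min(), x.max()], float)
        var = np.array([x.var(), x.var()])
        pi = np.array([0.5, 0.5])
        for _ in range(n_iter):
            # E-step: responsibilities
            pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
            resp = pi * pdf
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: update weights, means, variances
            nk = resp.sum(axis=0)
            pi = nk / x.size
            mu = (resp * x[:, None]).sum(axis=0) / nk
            var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        return mu, var, pi, resp[:, np.argmax(mu)]
    # toy "image": background pixels around 10 counts, nebula pixels around 40 counts
    rng = np.random.default_rng(0)
    pixels = np.concatenate([rng.normal(10, 2, 5000), rng.normal(40, 6, 800)])
    mu, var, pi, p_bright = em_two_gaussians(pixels)
    print(mu.round(1), (p_bright > 0.5).sum(), "pixels flagged as nebula")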
COCOA code for creating mock observations of star cluster models
NASA Astrophysics Data System (ADS)
Askar, Abbas; Giersz, Mirek; Pych, Wojciech; Dalessandro, Emanuele
2018-04-01
We introduce and present results from the COCOA (Cluster simulatiOn Comparison with ObservAtions) code that has been developed to create idealized mock photometric observations using results from numerical simulations of star cluster evolution. COCOA is able to present the output of realistic numerical simulations of star clusters carried out using Monte Carlo or N-body codes in a way that is useful for direct comparison with photometric observations. In this paper, we describe the COCOA code and demonstrate its different applications by utilizing globular cluster (GC) models simulated with the MOCCA (MOnte Carlo Cluster simulAtor) code. COCOA is used to synthetically observe these different GC models with optical telescopes, perform point spread function photometry, and subsequently produce observed colour-magnitude diagrams. We also use COCOA to compare the results from synthetic observations of a cluster model that has the same age and metallicity as the Galactic GC NGC 2808 with observations of the same cluster carried out with a 2.2 m optical telescope. We find that COCOA can effectively simulate realistic observations and recover photometric data. COCOA has numerous scientific applications that may be helpful for both theoreticians and observers who work on star clusters. Plans for further improving and developing the code are also discussed in this paper.
Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E
2017-04-15
Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically-plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks within scan for different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed 4 variations of ICA (p<0.001) for predicting the task being performed within each scan using artifact-cleaned components. The NMF algorithms, which suppressed negative BOLD signal, had the poorest accuracy compared to the ICA and sparse coding algorithms. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy (p<0.001). Lower classification accuracy occurred when the extracted spatial maps contained more CSF regions (p<0.001). The success of sparse coding algorithms suggests that algorithms which enforce sparsity, discourage multitasking, and promote local specialization may capture better the underlying source processes than those which allow inexhaustible local processes such as ICA. Negative BOLD signal may capture task-related activations. Copyright © 2017 Elsevier B.V. All rights reserved.
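As a rough sketch of how the three factorization constraints above can be applied to the same scan, the snippet below uses scikit-learn's FastICA, NMF, and DictionaryLearning on a placeholder time-by-voxel matrix; the toy data, component count, and sparsity measure are assumptions for illustration and do not reproduce the paper's pipeline (which also includes artifact cleaning and a classifier on the time-series weights).

import numpy as np
from sklearn.decomposition import FastICA, NMF, DictionaryLearning

rng = np.random.default_rng(0)
X = np.abs(rng.standard_normal((120, 2000)))   # placeholder (time points x voxels); NMF needs non-negative data
k = 20                                          # number of networks/components (assumption)

ica = FastICA(n_components=k, random_state=0)
ica_weights = ica.fit_transform(X)              # (time x k) time courses
ica_maps = ica.components_                      # (k x voxels) spatial maps

nmf = NMF(n_components=k, init="nndsvd", max_iter=500, random_state=0)
nmf_weights = nmf.fit_transform(X)              # non-negative time courses
nmf_maps = nmf.components_                      # non-negative spatial maps

# For spatially sparse maps, factorize the transposed (voxels x time) matrix so the
# sparse codes index voxels; the dictionary atoms are then temporal basis functions.
dl = DictionaryLearning(n_components=k, alpha=1.0, transform_algorithm="lasso_lars", random_state=0)
sparse_maps = dl.fit_transform(X.T)             # (voxels x k) sparse spatial maps
print("fraction of zero-valued voxels in sparse maps:", np.mean(sparse_maps == 0.0))

The time-series weights from each factorization could then be fed to any classifier to predict the task condition within a scan.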
Matney, Susan; Bakken, Suzanne; Huff, Stanley M
2003-01-01
In recent years, the Logical Observation Identifiers, Names, and Codes (LOINC) Database has been expanded to include assessment items of relevance to nursing and in 2002 met the criteria for "recognition" by the American Nurses Association. Assessment measures in LOINC include those related to vital signs, obstetric measurements, clinical assessment scales, assessments from standardized nursing terminologies, and research instruments. In order for LOINC to be of greater use in implementing information systems that support nursing practice, additional content is needed. Moreover, those implementing systems for nursing practice must be aware of the manner in which LOINC codes for assessments can be appropriately linked with other aspects of the nursing process such as diagnoses and interventions. Such linkages are necessary to document nursing contributions to healthcare outcomes within the context of a multidisciplinary care environment and to facilitate building of nursing knowledge from clinical practice. The purposes of this paper are to provide an overview of the LOINC database, to describe examples of assessments of relevance to nursing contained in LOINC, and to illustrate linkages of LOINC assessments with other nursing concepts.
Stimulus information contaminates summation tests of independent neural representations of features
NASA Technical Reports Server (NTRS)
Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.
2002-01-01
Many models of visual processing assume that visual information is analyzed into separable and independent neural codes, or features. A common psychophysical test of independent features is known as a summation study, which measures performance in a detection, discrimination, or visual search task as the number of proposed features increases. Improvement in human performance with increasing number of available features is typically attributed to the summation, or combination, of information across independent neural coding of the features. In many instances, however, increasing the number of available features also increases the stimulus information in the task, as assessed by an optimal observer that does not include the independent neural codes. In a visual search task with spatial frequency and orientation as the component features, a particular set of stimuli were chosen so that all searches had equivalent stimulus information, regardless of the number of features. In this case, human performance did not improve with increasing number of features, implying that the improvement observed with additional features may be due to stimulus information and not the combination across independent features.
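For context, the textbook ideal-summation prediction for N equally informative, statistically independent features is often written as the expression below; this is a standard signal-detection formulation and not necessarily the exact observer model used in the study.

\[
  d'_{N} \;=\; \sqrt{\sum_{i=1}^{N} \left(d'_{i}\right)^{2}} \;=\; \sqrt{N}\, d'_{1}
  \quad \text{when all } d'_{i} \text{ are equal.}
\]

When the stimuli are instead constructed so that total stimulus information is equated across feature counts, as in the search task above, an observer limited only by stimulus information predicts flat performance, which is what was observed for the human observers.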
Grenier, Christophe; Anbergen, Hauke; Bense, Victor; ...
2018-02-26
In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. Here in this paper, this issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs resulting from differences in the governing equations, discretization issues, or in the freezing curve used by some codes.
Updates to the CMAQ Post Processing and Evaluation Tools for 2016
In the spring of 2016, the evaluation tools distributed with the CMAQ model code were updated and new tools were added to the existing set of tools. Observation data files, compatible with the AMET software, were also made available on the CMAS website for the first time with the...
ERIC Educational Resources Information Center
Chiu, Angela W.; McLeod, Bryce D.; Har, Kim; Wood, Jeffrey J.
2009-01-01
Background: Few studies have examined the link between child-therapist alliance and outcome in manual-guided cognitive behavioral therapy (CBT) for children diagnosed with anxiety disorders. This study sought to clarify the nature and strength of this relation. Methods: The Therapy Process Observational Coding System for Child…
Nonassertive Mothers, Aggressive Teens: Toughlove as a Community Intervention.
ERIC Educational Resources Information Center
Klug, Wayne
A process study was conducted in two phases to examine whether a mother's participation in a Toughlove program improved her child's behavior. During phase 1, small-group Toughlove meetings were used for observation and then were tape-recorded and transcribed. Transcriptions were coded to identify instances of social support;…
ZASPE: A Code to Measure Stellar Atmospheric Parameters and their Covariance from Spectra
NASA Astrophysics Data System (ADS)
Brahm, Rafael; Jordán, Andrés; Hartman, Joel; Bakos, Gáspár
2017-05-01
We describe the Zonal Atmospheric Stellar Parameters Estimator (zaspe), a new algorithm, and its associated code, for determining precise stellar atmospheric parameters and their uncertainties from high-resolution echelle spectra of FGK-type stars. zaspe estimates stellar atmospheric parameters by comparing the observed spectrum against a grid of synthetic spectra, restricted to the spectral zones that are most sensitive to changes in the atmospheric parameters. Realistic uncertainties in the parameters are computed from the data itself, by taking into account the systematic mismatches between the observed spectrum and the best-fitting synthetic one. The covariances between the parameters are also estimated in the process. zaspe can in principle use any pre-calculated grid of synthetic spectra, but unbiased grids are required to obtain accurate parameters. We tested the performance of two existing libraries, and we concluded that neither is suitable for computing precise atmospheric parameters. We describe a process to synthesize a new library of synthetic spectra that was found to generate consistent results when compared with parameters obtained with different methods (interferometry, asteroseismology, equivalent widths).
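The core grid-matching step can be pictured with the short numpy sketch below; the array names, uniform chi-square weighting, and three-parameter grid are placeholder assumptions, and zaspe's actual implementation (including its data-driven uncertainty and covariance estimates) is more involved.

import numpy as np

def best_grid_match(obs_flux, grid_fluxes, grid_params, zone_mask):
    # Pick the synthetic spectrum that best matches the observation,
    # restricting the chi-square to pre-selected 'sensitive' pixels.
    #   obs_flux    : (n_pix,) observed, continuum-normalized flux (placeholder)
    #   grid_fluxes : (n_models, n_pix) synthetic spectra on the same wavelength grid
    #   grid_params : (n_models, 3) e.g. Teff, logg, [Fe/H] of each grid point
    #   zone_mask   : (n_pix,) boolean mask of parameter-sensitive zones
    resid = grid_fluxes[:, zone_mask] - obs_flux[zone_mask]
    chi2 = np.sum(resid ** 2, axis=1)      # uniform weights for simplicity
    best = int(np.argmin(chi2))
    return grid_params[best], chi2[best]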
Unit cell geometry of multiaxial preforms for structural composites
NASA Technical Reports Server (NTRS)
Ko, Frank; Lei, Charles; Rahman, Anisur; Du, G. W.; Cai, Yun-Jia
1993-01-01
The objective of this study is to investigate the yarn geometry of multiaxial preforms. The importance of multiaxial preforms for structural composites is well recognized by the industry but, to exploit their full potential, engineering design rules must be established. This study is a step in that direction. In this work the preform geometry for knitted and braided preforms was studied by making a range of well-designed samples and studying them by photo microscopy. The structural geometry of the preforms is related to the processing parameters. Based on solid modeling and B-spline methodology, a software package is developed. This computer code enables real-time structural representations of complex fiber architecture based on the rules of preform manufacturing. The code has the capability of zooming and section plotting. These capabilities provide a powerful means to study the effect of processing variables on the preform geometry. The code can also be extended to an auto mesh generator for downstream structural analysis using the finite element method. This report is organized into six sections. In the first section the scope and background of this work are elaborated. In section two the unit cell geometries of braided and multi-axial warp knitted preforms are discussed. The theoretical framework of yarn path modeling and solid modeling is presented in section three. The thin-section microscopy carried out to observe the structural geometry of the preforms is the subject of section four. The structural geometry is related to the processing parameters in section five. Section six documents the implementation of the modeling techniques into the computer code MP-CAD. A user manual for the software is also presented here. The source codes and published papers are listed in the Appendices.
Biases in GNSS-Data Processing
NASA Astrophysics Data System (ADS)
Schaer, S. C.; Dach, R.; Lutz, S.; Meindl, M.; Beutler, G.
2010-12-01
Within the Global Positioning System (GPS) traditionally different types of pseudo-range measurements (P-code, C/A-code) are available on the first frequency that are tracked by the receivers with different technologies. For that reason, P1-C1 and P1-P2 Differential Code Biases (DCB) need to be considered in a GPS data processing with a mix of different receiver types. Since the Block IIR-M series of GPS satellites also provide C/A-code on the second frequency, P2-C2 DCB need to be added to the list of biases for maintenance. Potential quarter-cycle biases between different phase observables (specifically L2P and L2C) are another issue. When combining GNSS (currently GPS and GLONASS), careful consideration of inter-system biases (ISB) is indispensable, in particular when an adequate combination of individual GLONASS clock correction results from different sources (using, e.g., different software packages) is intended. Facing the GPS and GLONASS modernization programs and the upcoming GNSS, like the European Galileo and the Chinese Compass, an increasing number of types of biases is expected. The Center for Orbit Determination in Europe (CODE) is monitoring these GPS and GLONASS related biases for a long time based on RINEX files of the tracking network of the International GNSS Service (IGS) and in the frame of the data processing as one of the global analysis centers of the IGS. Within the presentation we give an overview on the stability of the biases based on the monitoring. Biases derived from different sources are compared. Finally, we give an outlook on the potential handling of such biases with the big variety of signals and systems expected in the future.
An evaluation of the effect of JPEG, JPEG2000, and H.264/AVC on CQR codes decoding process
NASA Astrophysics Data System (ADS)
Vizcarra Melgar, Max E.; Farias, Mylène C. Q.; Zaghetto, Alexandre
2015-02-01
This paper presents a binary matrix code based on the QR Code (Quick Response Code), denoted as CQR Code (Colored Quick Response Code), and evaluates the effect of JPEG, JPEG2000 and H.264/AVC compression on the decoding process. The proposed CQR Code has three additional colors (red, green and blue), which enables twice as much storage capacity when compared to the traditional black and white QR Code. Using the Reed-Solomon error-correcting code, the CQR Code model has a theoretical correction capability of 38.41%. The goal of this paper is to evaluate the effect that degradations inserted by common image compression algorithms have on the decoding process. Results show that a successful decoding process can be achieved for compression rates up to 0.3877 bits/pixel, 0.1093 bits/pixel and 0.3808 bits/pixel for JPEG, JPEG2000 and H.264/AVC formats, respectively. The algorithm that presents the best performance is the H.264/AVC, followed by the JPEG2000, and JPEG.
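A back-of-the-envelope illustration of the capacity and error-correction ideas is sketched below; the module count and the Reed-Solomon (n, k) parameters are placeholders chosen for illustration and are not the CQR Code specification (whose particular configuration yields the 38.41% figure quoted above).

import math

def qr_like_capacity(n_modules, n_colors):
    # Raw bits stored when each data module can take one of n_colors values.
    return n_modules * math.log2(n_colors)

def rs_correctable_fraction(n, k):
    # Fraction of codeword symbols a Reed-Solomon (n, k) code can correct: t/n with t = (n-k)//2.
    return ((n - k) // 2) / n

# Doubling the bits per module: 2 colors -> 1 bit/module, 4 usable colors -> 2 bits/module.
print(qr_like_capacity(1000, 2), qr_like_capacity(1000, 4))   # 1000.0 vs 2000.0 bits
print(rs_correctable_fraction(255, 128))                      # illustrative RS parameters only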
NASA Astrophysics Data System (ADS)
Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong
2017-07-01
The latest high efficiency video coding (HEVC) standard significantly increases the encoding complexity for improving its coding efficiency. Due to the limited computational capability of handheld devices, complexity constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme that is chosen through offline statistics is developed at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BD-PSNR is observed for 18 sequences when the target complexity is around 40%.
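A schematic of budget-driven adaptive mode selection of this general kind is sketched below; encode_ctu, mode_tiers, and the per-CTU time-sharing rule are hypothetical placeholders and do not reproduce the authors' algorithm or its offline mode statistics.

import time

def encode_with_budget(ctus, time_budget, mode_tiers, encode_ctu):
    # Illustrative complexity-control loop for interframe coding (not the authors' method).
    #   ctus       : sequence of coding tree units to encode
    #   time_budget: total encoding time allowed (seconds)
    #   mode_tiers : list of {"modes": [...], "expected_time": seconds}, cheapest first (placeholder)
    #   encode_ctu : function(ctu, modes) -> (bits, distortion)  (placeholder)
    spent = 0.0
    results = []
    for i, ctu in enumerate(ctus):
        per_ctu = (time_budget - spent) / max(len(ctus) - i, 1)
        tier = 0
        for t, tier_spec in enumerate(mode_tiers):
            if tier_spec["expected_time"] <= per_ctu:
                tier = t            # richest mode set that still fits the per-CTU share
        start = time.perf_counter()
        results.append(encode_ctu(ctu, mode_tiers[tier]["modes"]))
        spent += time.perf_counter() - start
    return results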
Sarrafzadegan, Nizal; Kelishad, Roya; Rabiei, Katayoun; Abedi, Heidarali; Mohaseli, Khadijeh Fereydoun; Masooleh, Hasan Azaripour; Alavi, Mousa; Heidari, Gholamreza; Ghaffari, Mostafa; O’Loughlin, Jennifer
2012-01-01
Background: Iran is one of the countries that has ratified the World Health Organization Framework Convention of Tobacco Control (WHO-FCTC), and has implemented a series of tobacco control interventions including the Comprehensive Tobacco Control Law. Enforcement of this legislation and assessment of its outcome requires a dedicated evaluation system. This study aimed to develop a generic model, based on the WHO-FCTC articles, to evaluate the implementation of the Comprehensive Tobacco Control Law in Iran. Materials and Methods: Using a grounded theory approach, qualitative data were collected from 265 subjects in individual interviews and focus group discussions with policymakers who designed the legislation, key stakeholders, and members of the target community. In addition, field observation data in supermarkets/shops, restaurants, teahouses and coffee shops were collected. Data were analyzed in two stages through conceptual theoretical coding. Findings: Overall, 617 open codes were extracted from the data into tables; 72 level-3 codes were retained from the level-2 code series. Using a Model Met paradigm, the relationships between the components of each paradigm were depicted graphically. The evaluation model entailed three levels, namely: short-term results, process evaluation and long-term results. Conclusions: The central concept of the evaluation process is that enforcing the law influences a variety of internal and environmental factors, including legislative changes. These factors will be examined during the process evaluation and context evaluation. The current model can be applicable for providing FCTC evaluation tools across other jurisdictions. PMID:23833621
Onboard Image Processing System for Hyperspectral Sensor
Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun
2015-01-01
Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS’s performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which lead to reducing onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost. PMID:26404281
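As background for the entropy-coding stage described above, the following minimal Python sketch shows Golomb-Rice coding of non-negative prediction residuals, with a simple zigzag map for signed values; the adaptive, per-context choice of the Rice parameter k used in FELICS-style coders and the two-dimensional interpolation predictor are not reproduced here.

def zigzag(residual):
    # Map a signed prediction residual to a non-negative integer: 0, -1, 1, -2, ... -> 0, 1, 2, 3, ...
    return 2 * residual if residual >= 0 else -2 * residual - 1

def golomb_rice_encode(values, k):
    # Minimal Golomb-Rice encoder for non-negative integers. Each value n is split
    # into a quotient q = n >> k, written in unary (q ones followed by a zero),
    # and a k-bit binary remainder.
    bits = []
    for n in values:
        q, r = n >> k, n & ((1 << k) - 1)
        bits.extend([1] * q)                                      # unary part
        bits.append(0)                                            # unary terminator
        bits.extend((r >> i) & 1 for i in range(k - 1, -1, -1))   # k-bit remainder
    return bits

# Example: encode the residuals -1, 0, 3 with k = 2.
print(golomb_rice_encode([zigzag(r) for r in (-1, 0, 3)], k=2))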
High Angular Momentum Halo Gas: A Feedback and Code-independent Prediction of LCDM
NASA Astrophysics Data System (ADS)
Stewart, Kyle R.; Maller, Ariyeh H.; Oñorbe, Jose; Bullock, James S.; Joung, M. Ryan; Devriendt, Julien; Ceverino, Daniel; Kereš, Dušan; Hopkins, Philip F.; Faucher-Giguère, Claude-André
2017-07-01
We investigate angular momentum acquisition in Milky Way-sized galaxies by comparing five high resolution zoom-in simulations, each implementing identical cosmological initial conditions but utilizing different hydrodynamic codes: Enzo, Art, Ramses, Arepo, and Gizmo-PSPH. Each code implements a distinct set of feedback and star formation prescriptions. We find that while many galaxy and halo properties vary between the different codes (and feedback prescriptions), there is qualitative agreement on the process of angular momentum acquisition in the galaxy’s halo. In all simulations, cold filamentary gas accretion to the halo results in ˜4 times more specific angular momentum in cold halo gas (λ cold ≳ 0.1) than in the dark matter halo. At z > 1, this inflow takes the form of inspiraling cold streams that are co-directional in the halo of the galaxy and are fueled, aligned, and kinematically connected to filamentary gas infall along the cosmic web. Due to the qualitative agreement among disparate simulations, we conclude that the buildup of high angular momentum halo gas and the presence of these inspiraling cold streams are robust predictions of Lambda Cold Dark Matter galaxy formation, though the detailed morphology of these streams is significantly less certain. A growing body of observational evidence suggests that this process is borne out in the real universe.
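For reference, the dimensionless spin parameter used above can be written as follows, assuming the commonly used Bullock et al. convention (the paper's normalization may differ slightly):

\[
  \lambda' \;=\; \frac{j}{\sqrt{2}\, V_{\mathrm{vir}} R_{\mathrm{vir}}},
  \qquad j = \frac{|\vec{J}|}{M},
\]

so the quoted \(\lambda_{\mathrm{cold}} \gtrsim 0.1\) for cold halo gas is consistent with the stated factor of ~4 enhancement over typical dark matter halo spins of \(\lambda \approx 0.03\)-\(0.04\).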
Problems and Processes in Medical Encounters: The CASES method of dialogue analysis
Laws, M. Barton; Taubin, Tatiana; Bezreh, Tanya; Lee, Yoojin; Beach, Mary Catherine; Wilson, Ira B.
2013-01-01
Objective To develop methods to reliably capture structural and dynamic temporal features of clinical interactions. Methods Observational study of 50 audio-recorded routine outpatient visits to HIV specialty clinics, using innovative analytic methods. The Comprehensive Analysis of the Structure of Encounters System (CASES) uses transcripts coded for speech acts, then imposes larger-scale structural elements: threads – the problems or issues addressed; and processes within threads –basic tasks of clinical care labeled Presentation, Information, Resolution (decision making) and Engagement (interpersonal exchange). Threads are also coded for the nature of resolution. Results 61% of utterances are in presentation processes. Provider verbal dominance is greatest in information and resolution processes, which also contain a high proportion of provider directives. About half of threads result in no action or decision. Information flows predominantly from patient to provider in presentation processes, and from provider to patient in information processes. Engagement is rare. Conclusions In this data, resolution is provider centered; more time for patient participation in resolution, or interpersonal engagement, would have to come from presentation. Practice Implications Awareness of the use of time in clinical encounters, and the interaction processes associated with various tasks, may help make clinical communication more efficient and effective. PMID:23391684
Problems and processes in medical encounters: the cases method of dialogue analysis.
Laws, M Barton; Taubin, Tatiana; Bezreh, Tanya; Lee, Yoojin; Beach, Mary Catherine; Wilson, Ira B
2013-05-01
To develop methods to reliably capture structural and dynamic temporal features of clinical interactions. Observational study of 50 audio-recorded routine outpatient visits to HIV specialty clinics, using innovative analytic methods. The comprehensive analysis of the structure of encounters system (CASES) uses transcripts coded for speech acts, then imposes larger-scale structural elements: threads--the problems or issues addressed; and processes within threads--basic tasks of clinical care labeled presentation, information, resolution (decision making) and Engagement (interpersonal exchange). Threads are also coded for the nature of resolution. 61% of utterances are in presentation processes. Provider verbal dominance is greatest in information and resolution processes, which also contain a high proportion of provider directives. About half of threads result in no action or decision. Information flows predominantly from patient to provider in presentation processes, and from provider to patient in information processes. Engagement is rare. In this data, resolution is provider centered; more time for patient participation in resolution, or interpersonal engagement, would have to come from presentation. Awareness of the use of time in clinical encounters, and the interaction processes associated with various tasks, may help make clinical communication more efficient and effective. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Code of Federal Regulations, 2013 CFR
2013-01-01
... hermetically sealed containers; closures; code marking; heat processing; incubation. 355.25 Section 355.25... processing and hermetically sealed containers; closures; code marking; heat processing; incubation. (a... storage and transportation as evidenced by the incubation test. (h) Lots of canned products shall be...
Code of Federal Regulations, 2012 CFR
2012-01-01
... hermetically sealed containers; closures; code marking; heat processing; incubation. 355.25 Section 355.25... processing and hermetically sealed containers; closures; code marking; heat processing; incubation. (a... storage and transportation as evidenced by the incubation test. (h) Lots of canned products shall be...
Code of Federal Regulations, 2011 CFR
2011-01-01
... hermetically sealed containers; closures; code marking; heat processing; incubation. 355.25 Section 355.25... processing and hermetically sealed containers; closures; code marking; heat processing; incubation. (a... storage and transportation as evidenced by the incubation test. (h) Lots of canned products shall be...
Code of Federal Regulations, 2014 CFR
2014-01-01
... hermetically sealed containers; closures; code marking; heat processing; incubation. 355.25 Section 355.25... processing and hermetically sealed containers; closures; code marking; heat processing; incubation. (a... storage and transportation as evidenced by the incubation test. (h) Lots of canned products shall be...
Code of Federal Regulations, 2010 CFR
2010-01-01
... hermetically sealed containers; closures; code marking; heat processing; incubation. 355.25 Section 355.25... processing and hermetically sealed containers; closures; code marking; heat processing; incubation. (a... storage and transportation as evidenced by the incubation test. (h) Lots of canned products shall be...
77 FR 17460 - Multistakeholder Process To Develop Consumer Data Privacy Codes of Conduct
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-26
.... 120214135-2203-02] RIN 0660-XA27 Multistakeholder Process To Develop Consumer Data Privacy Codes of Conduct... request for public comments on the multistakeholder process to develop consumer data privacy codes of...-multistakeholder-process without change. All personal identifying information (for example, name, address...
An ERP study of recognition memory for concrete and abstract pictures in school-aged children
Boucher, Olivier; Chouinard-Leclaire, Christine; Muckle, Gina; Westerlund, Alissa; Burden, Matthew J.; Jacobson, Sandra W.; Jacobson, Joseph L.
2016-01-01
Recognition memory for concrete, nameable pictures is typically faster and more accurate than for abstract pictures. A dual-coding account for these findings suggests that concrete pictures are processed into verbal and image codes, whereas abstract pictures are encoded in image codes only. Recognition memory relies on two successive and distinct processes, namely familiarity and recollection. Whether these two processes are similarly or differently affected by stimulus concreteness remains unknown. This study examined the effect of picture concreteness on visual recognition memory processes using event-related potentials (ERPs). In a sample of children involved in a longitudinal study, participants (N = 96; mean age = 11.3 years) were assessed on a continuous visual recognition memory task in which half the pictures were easily nameable, everyday concrete objects, and the other half were three-dimensional abstract, sculpture-like objects. Behavioral performance and ERP correlates of familiarity and recollection (respectively, the FN400 and P600 repetition effects) were measured. Behavioral results indicated faster and more accurate identification of concrete pictures as “new” or “old” (i.e., previously displayed) compared to abstract pictures. ERPs were characterised by a larger repetition effect, on the P600 amplitude, for concrete than for abstract images, suggesting a graded recollection process dependent on the type of material to be recollected. Topographic differences were observed within the FN400 latency interval, especially over anterior-inferior electrodes, with the repetition effect more pronounced and localized over the left hemisphere for concrete stimuli, potentially reflecting different neural processes underlying early processing of verbal/semantic and visual material in memory. PMID:27329352
Numerical study of shock-wave/boundary layer interactions in premixed hydrogen-air hypersonic flows
NASA Technical Reports Server (NTRS)
Yungster, Shaye
1991-01-01
A computational study of shock wave/boundary layer interactions involving premixed combustible gases, and the resulting combustion processes is presented. The analysis is carried out using a new fully implicit, total variation diminishing (TVD) code developed for solving the fully coupled Reynolds-averaged Navier-Stokes equations and species continuity equations in an efficient manner. To accelerate the convergence of the basic iterative procedure, this code is combined with vector extrapolation methods. The chemical nonequilibrium processes are simulated by means of a finite-rate chemistry model for hydrogen-air combustion. Several validation test cases are presented and the results compared with experimental data or with other computational results. The code is then applied to study shock wave/boundary layer interactions in a ram accelerator configuration. Results indicate a new combustion mechanism in which a shock wave induces combustion in the boundary layer, which then propagates outwards and downstream. At higher Mach numbers, spontaneous ignition in part of the boundary layer is observed, which eventually extends along the entire boundary layer at still higher values of the Mach number.
Numerical study of shock-wave/boundary layer interactions in premixed hydrogen-air hypersonic flows
NASA Technical Reports Server (NTRS)
Yungster, Shaye
1990-01-01
A computational study of shock wave/boundary layer interactions involving premixed combustible gases, and the resulting combustion processes is presented. The analysis is carried out using a new fully implicit, total variation diminishing (TVD) code developed for solving the fully coupled Reynolds-averaged Navier-Stokes equations and species continuity equations in an efficient manner. To accelerate the convergence of the basic iterative procedure, this code is combined with vector extrapolation methods. The chemical nonequilibrium processes are simulated by means of a finite-rate chemistry model for hydrogen-air combustion. Several validation test cases are presented and the results compared with experimental data or with other computational results. The code is then applied to study shock wave/boundary layer interactions in a ram accelerator configuration. Results indicate a new combustion mechanism in which a shock wave induces combustion in the boundary layer, which then propagates outwards and downstream. At higher Mach numbers, spontaneous ignition in part of the boundary layer is observed, which eventually extends along the entire boundary layer at still higher values of the Mach number.
Venugopal, Divya; Rafi, Aboobacker Mohamed; Innah, Susheela Jacob; Puthayath, Bibin T.
2017-01-01
BACKGROUND: Process Excellence is a value-based approach that focuses on standardizing work processes by eliminating non-value-added processes, identifying process-improving methodologies, and maximizing the capacity and expertise of the staff. AIM AND OBJECTIVES: To evaluate the utility of Process Excellence tools in improving donor flow management in a tertiary care hospital by studying the current state of donor movement within the blood bank and providing recommendations for eliminating wait times and improving the process and workflow. MATERIALS AND METHODS: The work was done in two phases. The first phase comprised on-site observations with the help of an expert trained in Process Excellence methodology, who observed and documented various aspects of donor flow, donor turnaround time, total staff details, and operator process flow. The second phase comprised the constitution of a team to analyse the data collected. The analyzed data, along with the recommendations, were presented before an expert hospital committee and the management. RESULTS: Our analysis put forward our strengths and identified potential problems. Donor wait time was reduced by 50% after lean implementation due to better donor management and reorganization of the infrastructure of the donor area. Receptionist tracking showed that the staff spent 62% of their total time walking and 22% on other non-value-added activities. Defining duties for each staff member reduced the time they spent on non-value-added activities. Implementation of a token system, generation of a unique identification code for donors, and bar code labeling of the tubes and bags are among the other recommendations. CONCLUSION: Process Excellence is not a programme; it is a culture that transforms an organization and improves its quality and efficiency through new attitudes, elimination of waste, and reduction in costs. PMID:28970681
Venugopal, Divya; Rafi, Aboobacker Mohamed; Innah, Susheela Jacob; Puthayath, Bibin T
2017-01-01
Process Excellence is a value-based approach that focuses on standardizing work processes by eliminating non-value-added processes, identifying process-improving methodologies, and maximizing the capacity and expertise of the staff. The aim was to evaluate the utility of Process Excellence tools in improving donor flow management in a tertiary care hospital by studying the current state of donor movement within the blood bank and providing recommendations for eliminating wait times and improving the process and workflow. The work was done in two phases. The first phase comprised on-site observations with the help of an expert trained in Process Excellence methodology, who observed and documented various aspects of donor flow, donor turnaround time, total staff details, and operator process flow. The second phase comprised the constitution of a team to analyse the data collected. The analyzed data, along with the recommendations, were presented before an expert hospital committee and the management. Our analysis put forward our strengths and identified potential problems. Donor wait time was reduced by 50% after lean implementation due to better donor management and reorganization of the infrastructure of the donor area. Receptionist tracking showed that the staff spent 62% of their total time walking and 22% on other non-value-added activities. Defining duties for each staff member reduced the time they spent on non-value-added activities. Implementation of a token system, generation of a unique identification code for donors, and bar code labeling of the tubes and bags are among the other recommendations. Process Excellence is not a programme; it is a culture that transforms an organization and improves its quality and efficiency through new attitudes, elimination of waste, and reduction in costs.
CAA modeling of helicopter main rotor in hover
NASA Astrophysics Data System (ADS)
Kusyumov, Alexander N.; Mikhailov, Sergey A.; Batrakov, Andrey S.; Kusyumov, Sergey A.; Barakos, George
In this work, rotor aeroacoustics in hover is considered. Farfield observers are used, and the nearfield flow parameters are obtained using the in-house HMB and commercial Fluent CFD codes (identical hexa-grids are used for both solvers). Farfield noise at a remote observer position is calculated at the post-processing stage using the FW-H solver implemented in Fluent and HMB. The main rotor of the UH-1H helicopter is considered as a test case for comparison to experimental data. The sound pressure level is estimated for different rotor blade collectives and observation angles.
Gilmore-Bykovskyi, Andrea L.
2015-01-01
Mealtime behavioral symptoms are distressing and frequently interrupt eating for the individual experiencing them and others in the environment. In order to enable identification of potential antecedents to mealtime behavioral symptoms, a computer-assisted coding scheme was developed to measure caregiver person-centeredness and behavioral symptoms for nursing home residents with dementia during mealtime interactions. The purpose of this pilot study was to determine the acceptability and feasibility of procedures for video-capturing naturally-occurring mealtime interactions between caregivers and residents with dementia, to assess the feasibility, ease of use, and inter-observer reliability of the coding scheme, and to explore the clinical utility of the coding scheme. Trained observers coded 22 observations. Data collection procedures were feasible and acceptable to caregivers, residents and their legally authorized representatives. Overall, the coding scheme proved to be feasible, easy to execute and yielded good to very good inter-observer agreement following observer re-training. The coding scheme captured clinically relevant, modifiable antecedents to mealtime behavioral symptoms, but would be enhanced by the inclusion of measures for resident engagement and consolidation of items for measuring caregiver person-centeredness that co-occurred and were difficult for observers to distinguish. PMID:25784080
Characterizing Mathematics Classroom Practice: Impact of Observation and Coding Choices
ERIC Educational Resources Information Center
Ing, Marsha; Webb, Noreen M.
2012-01-01
Large-scale observational measures of classroom practice increasingly focus on opportunities for student participation as an indicator of instructional quality. Each observational measure necessitates making design and coding choices on how to best measure student participation. This study investigated variations of coding approaches that may be…
Pupil measures of alertness and mental load
NASA Technical Reports Server (NTRS)
Backs, Richard W.; Walrath, Larry C.
1988-01-01
A study of eight adults given active and passive search tasks showed that evoked pupillary response was sensitive to information processing demands. In particular, large pupillary diameter was observed in the active search condition where subjects were actively processing information relevant to task performance, as opposed to the passive search (control) condition where subjects passively viewed the displays. However, subjects may have simply been more aroused in the active search task. Of greater importance was that larger pupillary diameter, corresponding to longer search time, was observed for noncoded than for color-coded displays in active search. In the control condition, pupil diameter was larger with the color displays. The data indicate potential usefulness of pupillary responses in evaluating the information processing requirements of visual displays.
Bohlin, Jon; Eldholm, Vegard; Pettersson, John H O; Brynildsrud, Ola; Snipen, Lars
2017-02-10
The core genome consists of genes shared by the vast majority of a species and is therefore assumed to have been subjected to substantially stronger purifying selection than the more mobile elements of the genome, also known as the accessory genome. Here we examine intragenic base composition differences in core genomes and corresponding accessory genomes in 36 species, represented by the genomes of 731 bacterial strains, to assess the impact of selective forces on base composition in microbes. We also explore, in turn, how these results compare with findings for whole genome intragenic regions. We found that GC content in coding regions is significantly higher in core genomes than accessory genomes and whole genomes. Likewise, GC content variation within coding regions was significantly lower in core genomes than in accessory genomes and whole genomes. Relative entropy in coding regions, measured as the difference between observed and expected trinucleotide frequencies estimated from mononucleotide frequencies, was significantly higher in the core genomes than in accessory and whole genomes. Relative entropy was positively associated with coding region GC content within the accessory genomes, but not within the corresponding coding regions of core or whole genomes. The higher intragenic GC content and relative entropy, as well as the lower GC content variation, observed in the core genomes is most likely associated with selective constraints. It is unclear whether the positive association between GC content and relative entropy in the more mobile accessory genomes constitutes signatures of selection or selective neutral processes.
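The relative-entropy measure described above can be sketched as follows; this is a minimal implementation of the observed-versus-expected trinucleotide comparison, and the paper's exact handling of strand, windowing, and normalization may differ.

from collections import Counter
from math import log

def trinucleotide_relative_entropy(seq):
    # Kullback-Leibler divergence between observed trinucleotide frequencies and
    # those expected from the sequence's mononucleotide composition.
    seq = seq.upper()
    mono = Counter(seq)
    n_mono = sum(mono.values())
    tri = Counter(seq[i:i + 3] for i in range(len(seq) - 2))
    tri = Counter({t: c for t, c in tri.items() if set(t) <= set("ACGT")})  # drop ambiguous triplets
    n_tri = sum(tri.values())
    d = 0.0
    for t, c in tri.items():
        obs = c / n_tri
        exp = (mono[t[0]] / n_mono) * (mono[t[1]] / n_mono) * (mono[t[2]] / n_mono)
        d += obs * log(obs / exp)
    return d

# Example: compare the statistic for a coding region against an accessory-genome region.
print(trinucleotide_relative_entropy("ATGGCGATTAAACCCGGGTTTACGATG"))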
Dissociating action-effect activation and effect-based response selection.
Schwarz, Katharina A; Pfister, Roland; Wirth, Robert; Kunde, Wilfried
2018-05-25
Anticipated action effects have been shown to govern action selection and initiation, as described in ideomotor theory, and they have also been demonstrated to determine crosstalk between different tasks in multitasking studies. Such effect-based crosstalk was observed not only in a forward manner (with a first task influencing performance in a following second task) but also in a backward manner (the second task influencing the preceding first task), suggesting that action effect codes can become activated prior to a capacity-limited processing stage often denoted as response selection. The process of effect-based response production, by contrast, has been proposed to be capacity-limited. These observations jointly suggest that effect code activation can occur independently of effect-based response production, though this theoretical implication has not been tested directly at present. We tested this hypothesis by employing a dual-task set-up in which we manipulated the ease of effect-based response production (via response-effect compatibility) in an experimental design that allows for observing forward and backward crosstalk. We observed robust crosstalk effects and response-effect compatibility effects alike, but no interaction between both effects. These results indicate that effect activation can occur in parallel for several tasks, independently of effect-based response production, which is confined to one task at a time. Copyright © 2018 Elsevier B.V. All rights reserved.
Is phonology bypassed in normal or dyslexic development?
Pennington, B F; Lefly, D L; Van Orden, G C; Bookman, M O; Smith, S D
1987-01-01
A pervasive assumption in most accounts of normal reading and spelling development is that phonological coding is important early in development but is subsequently superseded by faster, orthographic coding which bypasses phonology. We call this assumption, which derives from dual process theory, the developmental bypass hypothesis. The present study tests four specific predictions of the developmental bypass hypothesis by comparing dyslexics and nondyslexics from the same families in a cross-sectional design. The four predictions are: 1) That phonological coding skill develops early in normal readers and soon reaches asymptote, whereas orthographic coding skill has a protracted course of development; 2) that the correlation of adult reading or spelling performance with phonological coding skill is considerably less than the correlation with orthographic coding skill; 3) that dyslexics who are mainly deficient in phonological coding skill should be able to bypass this deficit and eventually close the gap in reading and spelling performance; and 4) that the greatest differences between dyslexics and developmental controls on measures of phonological coding skill should be observed early rather than late in development. None of the four predictions of the developmental bypass hypothesis were upheld. Phonological coding skill continued to develop in nondyslexics until adulthood. It accounted for a substantial (32-53 percent) portion of the variance in reading and spelling performance in adult nondyslexics, whereas orthographic coding skill did not account for a statistically reliable portion of this variance. The dyslexics differed little across age in phonological coding skill, but made linear progress in orthographic coding skill, surpassing spelling-age (SA) controls by adulthood. Nonetheless, they did not close the gap in reading and spelling performance. Finally, dyslexics were significantly worse than SA (and Reading Age [RA]) controls in phonological coding skill only in adulthood.
ELF Sferics Observed at Large Distances
NASA Astrophysics Data System (ADS)
Dupree, N. A.; Moore, R. C.
2012-12-01
Model predictions of the ELF radio atmospheric generated by rocket-triggered lightning are compared with observations performed at large (>1 Mm) distances. The ability to infer source characteristics using observations at great distances may prove to greatly enhance the understanding of lightning processes that are associated with the production of transient luminous events (TLEs) as well as other ionospheric effects associated with lightning. The modeling of the sferic waveform is carried out using a modified version of the Long Wavelength Propagation Capability (LWPC) code developed by the Naval Ocean Systems Center over a period of many years. LWPC is an inherently narrowband propagation code that has been modified to predict the broadband response of the Earth-ionosphere waveguide to an impulsive lightning flash while preserving the ability of LWPC to account for an inhomogeneous waveguide. ELF observations performed in Alaska and Antarctica during rocket-triggered lightning experiments at the International Center for Lightning Research and Testing (ICLRT) located at Camp Blanding, Florida are presented. The lightning current waveforms directly measured at the base of the lightning channel (at the ICLRT) are used together with LWPC to predict the sferic waveform observed at the receiver locations under various ionospheric conditions. This paper critically compares observations with model predictions.
Near-line Archive Data Mining at the Goddard Distributed Active Archive Center
NASA Astrophysics Data System (ADS)
Pham, L.; Mack, R.; Eng, E.; Lynnes, C.
2002-12-01
NASA's Earth Observing System (EOS) is generating immense volumes of data, in some cases too much to provide to users with data-intensive needs. As an alternative to moving the data to the user and his/her research algorithms, we are providing a means to move the algorithms to the data. The Near-line Archive Data Mining (NADM) system is the Goddard Earth Sciences Distributed Active Archive Center's (GES DAAC) web data mining portal to the EOS Data and Information System (EOSDIS) data pool, a 50-TB online disk cache. The NADM web portal enables registered users to submit and execute data mining algorithm codes on the data in the EOSDIS data pool. A web interface allows users to access the NADM system. Users first develop personalized data mining code on their home platforms and then upload it to the NADM system. The C, FORTRAN and IDL languages are currently supported. The user-developed code is automatically audited for any potential security problems before it is installed within the NADM system and made available to the user. Once the code has been installed, the user is provided a test environment where he/she can test the execution of the software against data sets of his/her choosing. When the user is satisfied with the results, he/she can promote the code to the "operational" environment. From here the user can interactively run his/her code on the data available in the EOSDIS data pool. The user can also set up a processing subscription, which will automatically process new data as they become available in the EOSDIS data pool. The generated mined data products are then made available for FTP pickup. The NADM system uses the GES DAAC-developed Simple Scalable Script-based Science Processor (S4P) to automate tasks and perform the actual data processing. Users also have the option of selecting a DAAC-provided data mining algorithm and using it to process the data of their choice.
NASA Astrophysics Data System (ADS)
Ott, S.
2011-07-01
(On behalf of all contributors to the Herschel mission) The Herschel Space Observatory, the fourth cornerstone mission in the ESA science program, was launched 14th of May 2009. With a 3.5 m telescope, it is the largest space telescope ever launched. Herschel's three instruments (HIFI, PACS, and SPIRE) perform photometry and spectroscopy in the 55-671 micron range and will deliver exciting science for the astronomical community during at least three years of routine observations. Starting October 2009 Herschel has been performing and processing observations in routine science mode. The development of the Herschel Data Processing System (HIPE) started nine years ago to support the data analysis for Instrument Level Tests. To fulfil the expectations of the astronomical community, additional resources were made available to implement a freely distributable Data Processing System capable of interactively and automatically reducing Herschel data at different processing levels. The system combines data retrieval, pipeline execution, data quality checking and scientific analysis in one single environment. HIPE is the user-friendly face of Herschel interactive Data Processing. The software is coded in Java and Jython to be platform independent and to avoid the need for commercial licenses. It is distributed under the GNU Lesser General Public License (LGPL), permitting everyone to access and to re-use its code. We will summarise the current capabilities of the Herschel Data Processing system, highlight how the Herschel Data Processing system supported the Herschel observatory to meet the challenges of this large project, give an overview about future development milestones and plans, and how the astronomical community can contribute to HIPE.
Nimptsch, Ulrike
2016-06-01
To investigate changes in comorbidity coding after the introduction of diagnosis-related group (DRG) based prospective payment and whether trends differ regarding specific comorbidities. Nationwide administrative data (DRG statistics) from German acute care hospitals from 2005 to 2012. Observational study to analyze trends in comorbidity coding in patients hospitalized for common primary diseases and the effects on comorbidity-related risk of in-hospital death. Comorbidity coding was operationalized by Elixhauser diagnosis groups. The analyses focused on adult patients hospitalized for the primary diseases of heart failure, stroke, and pneumonia, as well as hip fracture. When focusing on the total frequency of diagnosis groups per record, an increase in depth of coding was observed. Between-hospital variations in depth of coding were present throughout the observation period. Specific comorbidity increases were observed in 15 of the 31 diagnosis groups, and decreases in comorbidity were observed for 11 groups. In patients hospitalized for heart failure, shifts of comorbidity-related risk of in-hospital death occurred in nine diagnosis groups, of which eight were directed toward the null. Comorbidity-adjusted outcomes in longitudinal administrative data analyses may be biased by nonconstant risk over time, changes in completeness of coding, and between-hospital variations in coding. Accounting for such issues is important when the respective observation period coincides with changes in the reimbursement system or other conditions that are likely to alter clinical coding practice. © Health Research and Educational Trust.
Quantitative Profiling of Peptides from RNAs classified as non-coding
Prabakaran, Sudhakaran; Hemberg, Martin; Chauhan, Ruchi; Winter, Dominic; Tweedie-Cullen, Ry Y.; Dittrich, Christian; Hong, Elizabeth; Gunawardena, Jeremy; Steen, Hanno; Kreiman, Gabriel; Steen, Judith A.
2014-01-01
Only a small fraction of the mammalian genome codes for messenger RNAs destined to be translated into proteins, and it is generally assumed that a large portion of transcribed sequences - including introns and several classes of non-coding RNAs (ncRNAs) do not give rise to peptide products. A systematic examination of translation and physiological regulation of ncRNAs has not been conducted. Here, we use computational methods to identify the products of non-canonical translation in mouse neurons by analyzing unannotated transcripts in combination with proteomic data. This study supports the existence of non-canonical translation products from both intragenic and extragenic genomic regions, including peptides derived from anti-sense transcripts and introns. Moreover, the studied novel translation products exhibit temporal regulation similar to that of proteins known to be involved in neuronal activity processes. These observations highlight a potentially large and complex set of biologically regulated translational events from transcripts formerly thought to lack coding potential. PMID:25403355
Speech processing using conditional observable maximum likelihood continuity mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, John; Nix, David
A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.
A systems neurophysiology approach to voluntary event coding.
Petruo, Vanessa A; Stock, Ann-Kathrin; Münchau, Alexander; Beste, Christian
2016-07-15
Mechanisms responsible for the integration of perceptual events and appropriate actions (sensorimotor processes) have been subject to intense research. Different theoretical frameworks have been put forward with the "Theory of Event Coding (TEC)" being one of the most influential. In the current study, we focus on the concept of 'event files' within TEC and examine what sub-processes being dissociable by means of cognitive-neurophysiological methods are involved in voluntary event coding. This was combined with EEG source localization. We also introduce reward manipulations to delineate the neurophysiological sub-processes most relevant for performance variations during event coding. The results show that processes involved in voluntary event coding included predominantly stimulus categorization, feature unbinding and response selection, which were reflected by distinct neurophysiological processes (the P1, N2 and P3 ERPs). On a system's neurophysiological level, voluntary event-file coding is thus related to widely distributed parietal-medial frontal networks. Attentional selection processes (N1 ERP) turned out to be less important. Reward modulated stimulus categorization in parietal regions likely reflecting aspects of perceptual decision making but not in other processes. The perceptual categorization stage appears central for voluntary event-file coding. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Friedel, R. H. W.; Bourdarie, S.; Fennell, J.; Kanekal, S.; Cayton, T. E.
2004-01-01
The highly energetic electron environment in the inner magnetosphere (GEO inward) has received a lot of research attention in recent years, as the dynamics of relativistic electron acceleration and transport are not yet fully understood. These electrons can cause deep dielectric charging in any space hardware in the MEO to GEO region. We use a novel approach to obtain a global representation of the inner magnetospheric energetic electron environment, which can reproduce the absolute environment (flux) for any spacecraft orbit in that region to within a factor of 2 for the energy range of 100 keV to 5 MeV electrons, for any level of magnetospheric activity. We combine the extensive set of inner magnetospheric energetic electron observations available at Los Alamos with the physics-based Salammbo transport code, using the data assimilation technique of "nudging". This in effect inputs in-situ data into the code and allows the diffusion mechanisms in the code to interpolate the data into regions and times where no data are available. We present here details of the methods used, both in the data assimilation process and in the necessary inter-calibration of the input data. We will present sample runs of the model/data code and compare the results to test spacecraft data not used in the data assimilation process.
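A minimal sketch of the "nudging" idea, relaxing a model state toward in-situ observations where and when they exist so that the model's own diffusion carries the information elsewhere, is given below. The toy one-dimensional diffusion model, the relaxation timescale, and the synthetic observations are illustrative assumptions, not the Salammbo implementation.

```python
import numpy as np

# Toy 1-D diffusion model of phase-space density f(L) with nudging toward sparse observations.
L = np.linspace(2.0, 7.0, 51)          # L-shell grid (assumed)
dL = L[1] - L[0]
f = np.exp(-(L - 6.0) ** 2)            # initial phase-space density (synthetic)
D = 1e-3                               # diffusion coefficient, constant for simplicity (assumed)
tau_nudge = 5.0                        # relaxation timescale toward observations (assumed)
obs_idx = np.array([10, 25, 40])       # grid points where a satellite provides data (synthetic)
f_obs = np.array([0.3, 0.6, 0.9])      # "observed" values at those points (synthetic)

dt = 0.1
for step in range(1000):
    # Radial diffusion (simple explicit finite difference).
    lap = np.zeros_like(f)
    lap[1:-1] = (f[2:] - 2 * f[1:-1] + f[:-2]) / dL ** 2
    f += dt * D * lap
    # Nudging: relax the model toward in-situ data where observations exist;
    # diffusion then spreads that information into unobserved regions and times.
    f[obs_idx] += dt * (f_obs - f[obs_idx]) / tau_nudge

print(f[obs_idx])   # model has been pulled toward the observed values
```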
New methods to benchmark simulations of accreting black holes systems against observations
NASA Astrophysics Data System (ADS)
Markoff, Sera; Chatterjee, Koushik; Liska, Matthew; Tchekhovskoy, Alexander; Hesp, Casper; Ceccobello, Chiara; Russell, Thomas
2017-08-01
The field of black hole accretion has been significantly advanced by the use of complex ideal general relativistic magnetohydrodynamics (GRMHD) codes, now capable of simulating scales from the event horizon out to ~10^5 gravitational radii at high resolution. The challenge remains how to test these simulations against data, because the self-consistent treatment of radiation is still in its early days and is complicated by dependence on non-ideal/microphysical processes not yet included in the codes. At the other extreme, a variety of phenomenological models (disk, corona, jet, wind) can describe spectra or variability signatures well in a particular waveband, although often not both. To bring these two methodologies together, we need robust observational “benchmarks” that can be identified and studied in simulations. I will focus on one example of such a benchmark, from recent observational campaigns on black holes across the mass scale: the jet break. I will describe new work attempting to understand what drives this feature by searching for regions that share similar trends in terms of dependence on accretion power or magnetisation. Such methods can allow early tests of simulation assumptions and help pinpoint which regions will dominate the light production, well before full radiative processes are incorporated, and will help guide the interpretation of, e.g., Event Horizon Telescope data.
A flexible surface wetness sensor using a RFID technique.
Yang, Cheng-Hao; Chien, Jui-Hung; Wang, Bo-Yan; Chen, Ping-Hei; Lee, Da-Sheng
2008-02-01
This paper presents a flexible wetness sensor whose detection signal, converted to a binary code, is transmitted through radio-frequency (RF) waves from a radio-frequency identification integrated circuit (RFID IC) to a remote reader. The flexible sensor, with a fixed operating frequency of 13.56 MHz, contains an RFID IC and a sensor circuit that is fabricated on a flexible printed circuit board (FPCB) using a Micro-Electro-Mechanical-System (MEMS) process. The sensor circuit contains a comb-shaped sensing area surrounded by an octagonal antenna with a width of 2.7 cm. The binary code transmitted from the RFID IC to the reader changes if the condition of the detector surface changes from dry to wet. This variation in the binary code can be observed on a digital oscilloscope connected to the reader.
Development of an Optimum Interpolation Analysis Method for the CYBER 205
NASA Technical Reports Server (NTRS)
Nestler, M. S.; Woollen, J.; Brin, Y.
1985-01-01
A state-of-the-art technique to assimilate the diverse observational database obtained during FGGE, and thus create initial conditions for numerical forecasts, is described. The GLA optimum interpolation (OI) analysis method analyzes pressure, winds, and temperature at sea level, mixing ratio at six mandatory pressure levels up to 300 mb, and heights and winds at twelve levels up to 50 mb. Conversion to the CYBER 205 required a major rewrite of the Amdahl OI code to take advantage of the CYBER vector processing capabilities. Structured programming methods were used to write the programs, and this has resulted in a modular, understandable code. Among the contributors to the increased speed of the CYBER code are a vectorized covariance-calculation routine, an extremely fast matrix equation solver, and an innovative data search and sort technique.
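The optimum interpolation update itself is the standard best-linear-unbiased-estimate formula; a minimal sketch under assumed covariances is shown below. The background field, observation operator, and error values are illustrative, not the GLA or CYBER 205 code.

```python
import numpy as np

def optimum_interpolation(xb, B, y, H, R):
    """Standard OI update: xa = xb + K (y - H xb), with K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

# Tiny illustrative example: a 3-point background field and 2 observations (all values assumed).
xb = np.array([10.0, 12.0, 14.0])                                  # background (first guess)
B = 4.0 * np.exp(-np.abs(np.subtract.outer(range(3), range(3))))   # background error covariance
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])                                    # observations at points 1 and 3
y = np.array([11.0, 13.0])                                         # observed values
R = 1.0 * np.eye(2)                                                # observation error covariance

print(optimum_interpolation(xb, B, y, H, R))
```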
Staggering of angular momentum distribution in fission
NASA Astrophysics Data System (ADS)
Tamagno, Pierre; Litaize, Olivier
2018-03-01
We review here the role of angular momentum distributions in the fission process. To do so, the algorithm implemented in the FIFRELIN code is detailed, with special emphasis on the place of fission fragment angular momenta. The Rayleigh distribution usually used for the angular momentum distribution is presented and the related model derivation is recalled. Arguments are given to justify why this distribution should not hold for low excitation energy of the fission fragments. An alternative ad hoc expression taking low-lying collectiveness into account is presented, as implemented in the FIFRELIN code. Yet no dramatic impact has been found on the observables currently provided by the code. To quantify the magnitude of the impact of the low-lying staggering in the angular momentum distribution, a textbook case is considered for the decay of the 144Ba nucleus at low excitation energy.
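The Rayleigh-type spin distribution referred to above is straightforward to sample; a minimal sketch is given below. It assumes the common discrete form P(J) proportional to (2J+1) exp[-(J+1/2)^2 / (2 sigma^2)], and the spin-cutoff value is illustrative rather than a FIFRELIN parameter.

```python
import numpy as np

def rayleigh_spin_pmf(sigma, j_max=30):
    """Discrete Rayleigh-like angular momentum distribution P(J) ~ (2J+1) exp(-(J+1/2)^2 / (2 sigma^2))."""
    J = np.arange(0, j_max + 1)
    w = (2 * J + 1) * np.exp(-((J + 0.5) ** 2) / (2.0 * sigma ** 2))
    return J, w / w.sum()

# Sample fragment spins for an assumed spin-cutoff parameter (illustrative value only).
rng = np.random.default_rng(0)
J, p = rayleigh_spin_pmf(sigma=7.0)
spins = rng.choice(J, size=10000, p=p)
print("mean J =", spins.mean())
```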
NASA Technical Reports Server (NTRS)
Habbal, Shadia R.; Gurman, Joseph (Technical Monitor)
2003-01-01
Investigations of the physical processes responsible for the acceleration of the solar wind were pursued with the development of two new solar wind codes: a hybrid code and a 2-D MHD code. Hybrid simulations were performed to investigate the interaction between ions and parallel-propagating low frequency ion cyclotron waves in a homogeneous plasma. In a low-beta plasma such as the solar wind plasma in the inner corona, the proton thermal speed is much smaller than the Alfven speed. Vlasov linear theory predicts that protons are not in resonance with low frequency ion cyclotron waves. However, non-linear effects make it possible for these waves to strongly heat and accelerate protons. This study has important implications for studies of the corona and the solar wind. Low frequency ion cyclotron waves or Alfven waves are commonly observed in the solar wind. Until now, it has been believed that these waves are not able to heat the solar wind plasma unless cascading processes transfer their energy to the high-frequency part of the spectrum. However, this study shows that these waves may directly heat and accelerate protons non-linearly. This process may play an important role in coronal heating and solar wind acceleration, at least in some parameter space.
Xiao, Bo; Imel, Zac E; Georgiou, Panayiotis G; Atkins, David C; Narayanan, Shrikanth S
2015-01-01
The technology for evaluating patient-provider interactions in psychotherapy (observational coding) has not changed in 70 years. It is labor-intensive, error prone, and expensive, limiting its use in evaluating psychotherapy in the real world. Engineering solutions from speech and language processing provide new methods for the automatic evaluation of provider ratings from session recordings. The primary data are 200 Motivational Interviewing (MI) sessions from a study on MI training methods with observer ratings of counselor empathy. Automatic Speech Recognition (ASR) was used to transcribe sessions, and the resulting words were used in a text-based predictive model of empathy. Two supporting datasets trained the speech processing tasks, including ASR (1200 transcripts from heterogeneous psychotherapy sessions and 153 transcripts and session recordings from 5 MI clinical trials). The accuracy of computationally-derived empathy ratings was evaluated against human ratings for each provider. Computationally-derived empathy scores and classifications (high vs. low) were highly accurate against human-based codes and classifications, with a correlation of 0.65 and an F-score (a weighted average of sensitivity and specificity) of 0.86, respectively. Empathy prediction using human transcription as input (as opposed to ASR) resulted in a slight increase in prediction accuracy, suggesting that the fully automatic system with ASR is relatively robust. Using speech and language processing methods, it is possible to generate accurate predictions of provider performance in psychotherapy from audio recordings alone. This technology can support large-scale evaluation of psychotherapy for dissemination and process studies.
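A text-based predictive model of session-level empathy can be sketched with a standard bag-of-words pipeline. The TF-IDF features and ridge regressor below are illustrative stand-ins for the paper's unspecified model, and the transcripts and ratings are synthetic.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Synthetic session transcripts and human empathy ratings (illustrative only).
transcripts = [
    "it sounds like that was really hard for you tell me more",
    "you need to stop drinking it is bad for you",
    "i hear that you are feeling torn about making a change",
    "just follow the plan i gave you last week",
]
empathy_scores = [6.2, 2.1, 5.8, 2.5]   # observer ratings (synthetic)

# Bag-of-words regression model mapping session text to an empathy score.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(transcripts, empathy_scores)

new_session = ["tell me more about how that felt for you"]
print(model.predict(new_session))
```

In practice such a model would be trained on many sessions and evaluated against held-out human codes, per provider, as in the study.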
Converting Panax ginseng DNA and chemical fingerprints into two-dimensional barcode.
Cai, Yong; Li, Peng; Li, Xi-Wen; Zhao, Jing; Chen, Hai; Yang, Qing; Hu, Hao
2017-07-01
In this study, we investigated how to convert the Panax ginseng DNA sequence code and chemical fingerprints into a two-dimensional code. In order to improve the compression efficiency, GATC2Bytes and digital merger compression algorithms are proposed. HPLC chemical fingerprint data of 10 groups of P. ginseng from Northeast China and the internal transcribed spacer 2 (ITS2) sequence code as the DNA sequence code were prepared for conversion. In order to convert such data into a two-dimensional code, the following six steps were performed: First, the chemical fingerprint characteristic data sets were obtained through the inflection filtering algorithm. Second, precompression processing of these data sets was undertaken. Third, precompression processing was undertaken with the P. ginseng DNA (ITS2) sequence codes. Fourth, the precompressed chemical fingerprint data and the DNA (ITS2) sequence code were combined in accordance with the set data format. Fifth, the combined data were compressed by Zlib, an open source data compression algorithm. Finally, the compressed data generated a two-dimensional code called a quick response code (QR code). Through the abovementioned converting process, it can be seen that the number of bytes needed for storing P. ginseng chemical fingerprints and its DNA (ITS2) sequence code can be greatly reduced. After GATC2Bytes algorithm processing, the ITS2 compression rate reaches 75%, and the chemical fingerprint compression rate exceeds 99.65% via filtration and digital merger compression algorithm processing. Therefore, the overall compression ratio even exceeds 99.36%. The capacity of the formed QR code is around 0.5 KB, which can easily and successfully be read and identified by any smartphone. P. ginseng chemical fingerprints and its DNA (ITS2) sequence code can form a QR code after data processing, and therefore the QR code can be a perfect carrier of the authenticity and quality of P. ginseng information. This study provides a theoretical basis for the development of a quality traceability system of traditional Chinese medicine based on a two-dimensional code.
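The final compression and encoding steps can be sketched with Python's built-in zlib module and the widely used qrcode package. The record layout joining the fingerprint data and the ITS2 sequence is an illustrative assumption; the GATC2Bytes and digital merger pre-compression steps are not reproduced here.

```python
import zlib
import qrcode  # third-party package; pip install qrcode[pil]

# Illustrative inputs: a shortened placeholder ITS2 sequence and placeholder fingerprint values.
its2_sequence = "GATCGATCCGTAGCTAGCTAGGATCCGATCG"
fingerprint = "12.3,45.6,78.9,10.1"

# Combine the two records in an assumed simple format, then compress with zlib.
combined = ("ITS2:" + its2_sequence + ";HPLC:" + fingerprint).encode("utf-8")
compressed = zlib.compress(combined, level=9)
print("raw bytes:", len(combined), "compressed bytes:", len(compressed))

# Generate a QR code carrying the compressed payload and save it as an image.
img = qrcode.make(compressed)
img.save("ginseng_fingerprint_qr.png")

# A reader would reverse the process: scan the code, then zlib.decompress() the payload.
```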
Data processing with microcode designed with source coding
McCoy, James A; Morrison, Steven E
2013-05-07
Programming for a data processor to execute a data processing application is provided using microcode source code. The microcode source code is assembled to produce microcode that includes digital microcode instructions with which to signal the data processor to execute the data processing application.
NASA Technical Reports Server (NTRS)
Rutishauser, David
2006-01-01
The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters that attempts to minimize execution time, while staying within resource constraints. The flexibility of using a custom reconfigurable implementation is exploited in a unique manner to leverage the lessons learned in vector supercomputer development. The vector processing framework is tailored to the application, with variable parameters that are fixed in traditional vector processing. Benchmark data that demonstrates the functionality and utility of the approach is presented. The benchmark data includes an identified bottleneck in a real case study example vector code, the NASA Langley Terminal Area Simulation System (TASS) application.
Olier, Ivan; Springate, David A.; Ashcroft, Darren M.; Doran, Tim; Reeves, David; Planner, Claire; Reilly, Siobhan; Kontopantelis, Evangelos
2016-01-01
Background The use of Electronic Health Records databases for medical research has become mainstream. In the UK, increasing use of Primary Care Databases is largely driven by almost complete computerisation and uniform standards within the National Health Service. Electronic Health Records research often begins with the development of a list of clinical codes with which to identify cases with a specific condition. We present a methodology and accompanying Stata and R commands (pcdsearch/Rpcdsearch) to help researchers in this task. We present severe mental illness (SMI) as an example. Methods We used the Clinical Practice Research Datalink, a UK Primary Care Database in which clinical information is largely organised using Read codes, a hierarchical clinical coding system. Pcdsearch is used to identify potentially relevant clinical codes and/or product codes from word-stubs and code-stubs suggested by clinicians. The returned code-lists are reviewed and codes relevant to the condition of interest are selected. The final code-list is then used to identify patients. Results We identified 270 Read codes linked to SMI and used them to identify cases in the database. We observed that our approach identified cases that would have been missed with a simpler approach using SMI registers defined within the UK Quality and Outcomes Framework. Conclusion We described a framework for researchers of Electronic Health Records databases, for identifying patients with a particular condition or matching certain clinical criteria. The method is invariant to coding system or database and can be used with SNOMED CT, ICD or other medical classification code-lists. PMID:26918439
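The stub-matching step performed by pcdsearch/Rpcdsearch can be approximated in a few lines of pandas: search a code dictionary for clinician-suggested word-stubs and code-stubs, then return the hits for clinical review. The column names, stubs, and dictionary rows below are illustrative assumptions, not the CPRD dictionary layout.

```python
import pandas as pd

# Illustrative code dictionary (real work would load the full Read code dictionary).
codes = pd.DataFrame({
    "read_code": ["E10..", "E11..", "Eu20.", "Eu31.", "H33.."],
    "description": ["Schizophrenic disorders", "Affective psychoses",
                    "Schizophrenia", "Bipolar affective disorder", "Asthma"],
})

word_stubs = ["schizo", "bipolar", "psychos"]   # suggested by clinicians (assumed)
code_stubs = ["Eu2", "Eu3"]                     # suggested code prefixes (assumed)

# Match descriptions against word-stubs and codes against code-stub prefixes.
word_hits = codes[codes["description"].str.contains("|".join(word_stubs), case=False)]
code_hits = codes[codes["read_code"].str.match("|".join(code_stubs))]

candidates = pd.concat([word_hits, code_hits]).drop_duplicates()
print(candidates)   # this candidate list would then be reviewed by clinicians
```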
Faculty Observables and Self-Reported Responsiveness to Academic Dishonesty
ERIC Educational Resources Information Center
Burrus, Robert T., Jr.; Jones, Adam T.; Sackley, William H.; Walker, Michael
2015-01-01
Prior to 2009, a mid-sized public institution in the southeast had a faculty-driven honor policy characterized by little education about the policy and no tracking of repeat offenders. An updated code, implemented in August of 2009, required that students sign an honor pledge, created a formal student honor board, and developed a process to track…
250 ms to Code for Action Affordance during Observation of Manipulable Objects
ERIC Educational Resources Information Center
Proverbio, Alice Mado; Adorni, Roberta; D'Aniello, Guido Edoardo
2011-01-01
It is well known that viewing graspable tools (but not other objects) activates motor-related brain regions, but the time course of affordance processing has remained relatively unexplored. In this study, EEG was continuously recorded from 128 scalp sites in 15 right-handed university students while they received stimuli in the form of 150…
NASA Astrophysics Data System (ADS)
Liu, Tianyu; Wolfe, Noah; Lin, Hui; Zieb, Kris; Ji, Wei; Caracappa, Peter; Carothers, Christopher; Xu, X. George
2017-09-01
This paper contains two parts revolving around Monte Carlo transport simulation on Intel Many Integrated Core coprocessors (MIC, also known as Xeon Phi). (1) MCNP 6.1 was recompiled into multithreading (OpenMP) and multiprocessing (MPI) forms respectively without modification to the source code. The new codes were tested on a 60-core 5110P MIC. The test case was FS7ONNi, a radiation shielding problem used in MCNP's verification and validation suite. It was observed that both codes became slower on the MIC than on a 6-core X5650 CPU, by a factor of 4 for the MPI code and, abnormally, 20 for the OpenMP code, and both exhibited limited strong-scaling capability. (2) We have recently added a Constructive Solid Geometry (CSG) module to our ARCHER code to provide better support for geometry modelling in radiation shielding simulation. The functions of this module are frequently called in the particle random walk process. To identify the performance bottleneck we developed a CSG proxy application and profiled the code using the geometry data from FS7ONNi. The profiling data showed that the code was primarily memory-latency bound on the MIC. This study suggests that despite the low initial porting effort, Monte Carlo codes do not naturally lend themselves to the MIC platform, just as they do not to GPUs, and that the memory latency problem needs to be addressed in order to achieve a decent performance gain.
Tang, Guo-Qing; Maxwell, E. Stuart
2008-01-01
The amphibian Xenopus provides a model organism for investigating microRNA expression during vertebrate embryogenesis and development. Searching available Xenopus genome databases using known human pre-miRNAs as query sequences, more than 300 genes encoding 142 Xenopus tropicalis miRNAs were identified. Analysis of Xenopus tropicalis miRNA genes revealed a predominant positioning within introns of protein-coding and nonprotein-coding RNA Pol II-transcribed genes. MiRNA genes were also located in pre-mRNA exons and positioned intergenically between known protein-coding genes. Many miRNA species were found in multiple locations and in more than one genomic context. MiRNA genes were also clustered throughout the genome, indicating the potential for the cotranscription and coordinate expression of miRNAs located in a given cluster. Northern blot analysis confirmed the expression of many identified miRNAs in both X. tropicalis and X. laevis. Comparison of X. tropicalis and X. laevis blots revealed comparable expression profiles, although several miRNAs exhibited species-specific expression in different tissues. More detailed analysis revealed that for some miRNAs, the tissue-specific expression profile of the pri-miRNA precursor was distinctly different from that of the mature miRNA profile. Differential miRNA precursor processing in both the nucleus and cytoplasm was implicated in the observed tissue-specific differences. These observations indicated that post-transcriptional processing plays an important role in regulating miRNA expression in the amphibian Xenopus. PMID:18032731
Activity of striatal neurons reflects social action and own reward.
Báez-Mendoza, Raymundo; Harris, Christopher J; Schultz, Wolfram
2013-10-08
Social interactions provide agents with the opportunity to earn higher benefits than when acting alone and contribute to evolutionarily stable strategies. A basic requirement for engaging in beneficial social interactions is to recognize the actor whose movement results in reward. Despite the recent interest in the neural basis of social interactions, the neurophysiological mechanisms identifying the actor in social reward situations are unknown. A brain structure well suited for exploring this issue is the striatum, which plays a role in movement, reward, and goal-directed behavior. In humans, the striatum is involved in social processes related to reward inequity, donations to charity, and observational learning. We studied the neurophysiology of social action for reward in rhesus monkeys performing a reward-giving task. The behavioral data showed that the animals distinguished between their own and the conspecific's reward and knew which individual acted. Striatal neurons coded primarily the animal's own reward but rarely the other's reward. Importantly, the activations occurred preferentially, and in approximately similar fractions, when either the own or the conspecific's action was followed by own reward. Other striatal neurons showed social action coding without reward. Some of the social action coding disappeared when the conspecific's role was simulated by a computer, confirming a social rather than observational relationship. These findings demonstrate a role of striatal neurons in identifying the social actor and own reward in a social setting. These processes may provide basic building blocks underlying the brain's function in social interactions.
NASA Astrophysics Data System (ADS)
Li, Xingxing; Ge, Maorong; Li, Xin; Zhang, Xiaohong; Wu, Mingkui; Wickert, Jens; Schuh, Harald
2017-04-01
The rapid development of multi-constellation GNSSs (Global Navigation Satellite Systems, e.g., BeiDou, Galileo, GLONASS, GPS) and the IGS (International GNSS Service) Multi-GNSS Experiment (MGEX) bring great opportunities and challenges for real-time precise positioning services. In this contribution, we present a GPS+GLONASS+BeiDou+Galileo four-system model to fully exploit the observations of all these four navigation satellite systems for real-time precise orbit determination, clock estimation and positioning. Meanwhile, an efficient multi-GNSS real-time precise positioning service system is designed and demonstrated by using the Multi-GNSS Experiment (MGEX) and International GNSS Service (IGS) data streams, including stations all over the world. The addition of the BeiDou, Galileo and GLONASS systems to the standard GPS-only processing reduces the convergence time by almost 70%, while the positioning accuracy is improved by about 25%. The successful launch of five new-generation satellites of the Chinese BeiDou Navigation Satellite System (BDS-3) marks a significant step in expanding BeiDou into a navigation system with global coverage. We present an initial characterization and performance assessment for these new-generation BeiDou-3 satellites and their signals. The characteristics of the B1C, B1I, B2a, B2b and B3I signals are evaluated in terms of observed carrier-to-noise density ratio, pseudorange multipath and noise, triple-frequency carrier phase ionosphere-free and geometry-free combination, and double-differenced carrier phase and code residuals. With respect to BeiDou-2 satellites, the analysis of code multipath shows that the elevation-dependent code biases, which have previously been identified in the code observations of BeiDou-2 satellites, appear to be negligible for all the available signals of the new-generation BeiDou-3 satellites. This will significantly benefit precise applications that resolve wide-lane ambiguity based on Melbourne-Wübbena (MW) linear combinations and other applications such as single-frequency Precise Point Positioning (PPP) based on the ionosphere-free code-carrier combinations. With regard to the triple-frequency carrier phase ionosphere-free and geometry-free combinations, it is found that, unlike for BeiDou-2 and GPS Block IIF satellites, no apparent bias variations could be observed for any of the new-generation BeiDou-3 satellites, indicating a good consistency of the new-generation BeiDou-3 signals. The absence of such triple-frequency biases will make it convenient for the future processing of multi-frequency PPP using observations from new-generation BeiDou-3 satellites.
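The Melbourne-Wübbena combination mentioned above is a standard code-carrier combination; a minimal sketch is shown below. The frequencies correspond to the BeiDou B1C and B2a signals as an example, the observation values are synthetic, and this is not the authors' processing chain.

```python
# Melbourne-Wubbena (MW) code-carrier combination, used for wide-lane ambiguity work.
# L1m, L2m are carrier phases expressed in metres; P1, P2 are pseudoranges in metres.
C = 299792458.0            # speed of light (m/s)
f1 = 1575.42e6             # B1C frequency (Hz)
f2 = 1176.45e6             # B2a frequency (Hz)

def melbourne_wubbena(L1m, L2m, P1, P2):
    wide_lane_phase = (f1 * L1m - f2 * L2m) / (f1 - f2)
    narrow_lane_code = (f1 * P1 + f2 * P2) / (f1 + f2)
    return wide_lane_phase - narrow_lane_code   # metres; divide by the wide-lane wavelength for cycles

wl_wavelength = C / (f1 - f2)                   # roughly 0.75 m for B1C/B2a

# Synthetic observation values (illustrative numbers only).
mw_metres = melbourne_wubbena(L1m=21_000_000.123, L2m=21_000_000.456,
                              P1=21_000_001.0, P2=21_000_001.2)
print(mw_metres / wl_wavelength, "wide-lane cycles")
```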
NASA Astrophysics Data System (ADS)
Wang, Zhi-peng; Zhang, Shuai; Liu, Hong-zhao; Qin, Yi
2014-12-01
Based on a phase retrieval algorithm and the QR code, a new optical encryption technology that only needs to record one intensity distribution is proposed. In this encryption process, firstly, the QR code is generated from the information to be encrypted; then the generated QR code is placed in the input plane of a 4-f system to undergo double random phase encryption. Because only one intensity distribution in the output plane is recorded as the ciphertext, the encryption process is greatly simplified. In the decryption process, the corresponding QR code is retrieved using the phase retrieval algorithm. A priori information about the QR code is used as a support constraint in the input plane, which helps solve the stagnation problem. The original information can be recovered without distortion by scanning the QR code. The encryption process can be implemented either optically or digitally, and the decryption process uses a digital method. In addition, the security of the proposed optical encryption technology is analyzed. Theoretical analysis and computer simulations show that this optical encryption system is invulnerable to various attacks and suitable for harsh transmission conditions.
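The encryption stage (a QR code passed through a 4-f double random phase system, with only the output intensity recorded) can be sketched with two Fourier transforms; the iterative phase retrieval decryption is omitted. The image size, random seeds, and use of a plain FFT in place of an optical 4-f system are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the QR code placed in the input plane (binary image, illustrative).
qr = rng.integers(0, 2, size=(64, 64)).astype(float)

# Two statistically independent random phase masks (input plane and Fourier plane).
phase1 = np.exp(2j * np.pi * rng.random(qr.shape))
phase2 = np.exp(2j * np.pi * rng.random(qr.shape))

# Double random phase encoding, with the 4-f system modelled by two Fourier transforms.
field = np.fft.fft2(qr * phase1) * phase2
output = np.fft.ifft2(field)

# Only the intensity in the output plane is recorded as the ciphertext.
ciphertext = np.abs(output) ** 2
print(ciphertext.shape, ciphertext.dtype)
```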
Validation: Codes to compare simulation data to various observations
NASA Astrophysics Data System (ADS)
Cohn, J. D.
2017-02-01
Validation provides codes to compare several observations with simulated data: simulated stellar masses and star formation rates; the simulated stellar mass function against observed stellar mass functions from PRIMUS or SDSS-GALEX in several redshift bins from 0.01-1.0; and the simulated B band luminosity function against the observed stellar mass function. It also creates plots for various attributes, including stellar mass functions and stellar mass to halo mass. These codes can model predictions (in some cases alongside observational data) to test other mock catalogs.
The large-scale environment from cosmological simulations - I. The baryonic cosmic web
NASA Astrophysics Data System (ADS)
Cui, Weiguang; Knebe, Alexander; Yepes, Gustavo; Yang, Xiaohu; Borgani, Stefano; Kang, Xi; Power, Chris; Staveley-Smith, Lister
2018-01-01
Using a series of cosmological simulations that includes one dark-matter-only (DM-only) run, one gas cooling-star formation-supernova feedback (CSF) run and one that additionally includes feedback from active galactic nuclei (AGNs), we classify the large-scale structures with both a velocity-shear-tensor code (VWEB) and a tidal-tensor code (PWEB). We find that the baryonic processes have almost no impact on large-scale structures - at least not when classified using the aforementioned techniques. More importantly, our results confirm that the gas component alone can be used to infer the filamentary structure of the universe practically unbiased, which could be applied to cosmological constraints. In addition, the gas filaments are classified by their velocity (VWEB) and density (PWEB) fields, which can theoretically be connected to radio observations, such as H I surveys. This will help us link the radio observations with dark matter distributions at large scales in an unbiased way.
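The eigenvalue-counting classification used by V-web and P-web style codes can be sketched as follows: diagonalize the symmetric (velocity shear or tidal) tensor in each cell and count eigenvalues above a threshold to label the cell as void, sheet, filament, or knot. The tensor field and threshold below are synthetic and illustrative.

```python
import numpy as np

def classify_web(tensor_field, threshold=0.0):
    """Label each cell as 0=void, 1=sheet, 2=filament, 3=knot by counting
    eigenvalues of a symmetric 3x3 tensor that exceed the threshold."""
    eigvals = np.linalg.eigvalsh(tensor_field)           # shape (..., 3), ascending order
    return (eigvals > threshold).sum(axis=-1)

# Synthetic symmetric tensor field on a small grid (illustrative only).
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8, 8, 3, 3))
tensors = 0.5 * (A + np.swapaxes(A, -1, -2))              # symmetrize

labels = classify_web(tensors, threshold=0.2)
print(np.bincount(labels.ravel(), minlength=4))           # counts of void/sheet/filament/knot cells
```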
Underworld results as a triple (shopping list, posterior, priors)
NASA Astrophysics Data System (ADS)
Quenette, S. M.; Moresi, L. N.; Abramson, D.
2013-12-01
When studying long-term lithosphere deformation and other such large-scale, spatially distinct and behaviour-rich problems, there is a natural trade-off between the meaning of a model, the observations used to validate the model and the ability to compute over this space. For example, many models of varying lithologies, rheological properties and underlying physics may reasonably match (or not match) observables. To compound this problem, each realisation is computationally intensive, requiring high resolution, algorithm tuning and code tuning to contemporary computer hardware. It is often intractable to use sampling-based assimilation methods, but with better optimisation, the window of tractability becomes wider. The ultimate goal is to find a sweet spot where a formal assimilation method is used, and where a model conforms to observations. It is natural to think of this as an inverse problem, in which the underlying physics may be fixed and the rheological properties and possibly the lithologies themselves are unknown. What happens when we push this approach and treat some portion of the underlying physics as an unknown? At its extreme this is an intractable problem. However, there is an analogy here with how we develop software for these scientific problems. What happens when we treat the changing part of a largely complete code as an unknown, where the changes are working towards this sweet spot? When posed as a Bayesian inverse problem, the result is a triple - the model changes, the real priors and the real posterior. Not only does this give meaning to the process by which a code changes, it forms a mathematical bridge from an inverse problem to compiler optimisations given such changes. As a stepping-stone example we show a regional-scale heat flow model with constraining observations, and the inverse process including increasing complexity in the software. The implementation uses Underworld-GT (Underworld plus research extras to import geology and export geothermic measures, etc.). Underworld uses StGermain, an early (partial) implementation of the theories described here.
Technology Infusion of CodeSonar into the Space Network Ground Segment
NASA Technical Reports Server (NTRS)
Benson, Markland J.
2009-01-01
This slide presentation reviews the applicability of CodeSonar to the Space Network software. CodeSonar is a commercial off-the-shelf system that analyzes programs written in C, C++ or Ada for defects in the code. Software engineers use CodeSonar results as an input to the existing source code inspection process. The study is focused on large-scale software developed using formal processes. The systems studied are mission critical in nature, but some use commodity computer systems.
Deforestation and Carbon Loss in Southwest Amazonia: Impact of Brazil's Revised Forest Code
NASA Astrophysics Data System (ADS)
Roriz, Pedro Augusto Costa; Yanai, Aurora Miho; Fearnside, Philip Martin
2017-09-01
In 2012 Brazil's National Congress altered the country's Forest Code, decreasing various environmental protections in the set of regulations governing forests. This suggests consequences including increased deforestation, greater emissions of greenhouse gases, and decreased protection of fragile ecosystems. To ascertain the effects, a simulation was run to the year 2025 for the municipality (county) of Boca do Acre, Amazonas state, Brazil. A baseline scenario considered historical behavior (which did not respect the Forest Code), while two scenarios considered full compliance with the old Forest Code (Law 4771/1965) and the current Code (Law 12,651/2012) regarding the protection of "areas of permanent preservation" (APPs) along the edges of watercourses. The models were parameterized from satellite imagery and simulated using Dinamica-EGO software. Deforestation actors and processes in the municipality were observed in loco in 2012. Carbon emissions and loss of forest by 2025 were computed in the three simulation scenarios. There was a 10% difference in the loss of carbon stock and of forest between the scenarios with the two versions of the Forest Code. The baseline scenario showed the highest loss of carbon stocks and the highest increase in annual emissions. The greatest damage was caused by not protecting wetlands and riparian zones.
The origins and evolutionary history of human non-coding RNA regulatory networks.
Sherafatian, Masih; Mowla, Seyed Javad
2017-04-01
The evolutionary history and origin of the regulatory function of animal non-coding RNAs are not well understood. The lack of conservation of long non-coding RNAs and the small sizes of microRNAs have been major obstacles in their phylogenetic analysis. In this study, we tried to shed more light on the evolution of ncRNA regulatory networks by changing our phylogenetic strategy to focus on the evolutionary pattern of their protein-coding targets. We used available target databases of miRNAs and lncRNAs to find their protein-coding targets in human. We were able to recognize evolutionary hallmarks of ncRNA targets by phylostratigraphic analysis. We found the conventional 3'-UTR and lesser known 5'-UTR targets of miRNAs to be enriched at three consecutive phylostrata. First, in the eukaryota phylostratum, corresponding to the emergence of miRNAs, our study revealed that miRNA targets function primarily in cell cycle processes. Moreover, the same overrepresentation of the targets observed in the next two consecutive phylostrata, opisthokonta and eumetazoa, corresponded to the expansion periods of miRNAs in animal evolution. Coding sequence targets of miRNAs showed a delayed rise at the opisthokonta phylostratum, compared to the 3' and 5' UTR targets of miRNAs. The lncRNA regulatory network was the latest to evolve, at eumetazoa.
An ERP study of recognition memory for concrete and abstract pictures in school-aged children.
Boucher, Olivier; Chouinard-Leclaire, Christine; Muckle, Gina; Westerlund, Alissa; Burden, Matthew J; Jacobson, Sandra W; Jacobson, Joseph L
2016-08-01
Recognition memory for concrete, nameable pictures is typically faster and more accurate than for abstract pictures. A dual-coding account for these findings suggests that concrete pictures are processed into verbal and image codes, whereas abstract pictures are encoded in image codes only. Recognition memory relies on two successive and distinct processes, namely familiarity and recollection. Whether these two processes are similarly or differently affected by stimulus concreteness remains unknown. This study examined the effect of picture concreteness on visual recognition memory processes using event-related potentials (ERPs). In a sample of children involved in a longitudinal study, participants (N=96; mean age=11.3years) were assessed on a continuous visual recognition memory task in which half the pictures were easily nameable, everyday concrete objects, and the other half were three-dimensional abstract, sculpture-like objects. Behavioral performance and ERP correlates of familiarity and recollection (respectively, the FN400 and P600 repetition effects) were measured. Behavioral results indicated faster and more accurate identification of concrete pictures as "new" or "old" (i.e., previously displayed) compared to abstract pictures. ERPs were characterized by a larger repetition effect, on the P600 amplitude, for concrete than for abstract images, suggesting a graded recollection process dependent on the type of material to be recollected. Topographic differences were observed within the FN400 latency interval, especially over anterior-inferior electrodes, with the repetition effect more pronounced and localized over the left hemisphere for concrete stimuli, potentially reflecting different neural processes underlying early processing of verbal/semantic and visual material in memory. Copyright © 2016 Elsevier B.V. All rights reserved.
Coding stimulus amplitude by correlated neural activity
NASA Astrophysics Data System (ADS)
Metzen, Michael G.; Ávila-Åkerberg, Oscar; Chacron, Maurice J.
2015-04-01
While correlated activity is observed ubiquitously in the brain, its role in neural coding has remained controversial. Recent experimental results have demonstrated that correlated but not single-neuron activity can encode the detailed time course of the instantaneous amplitude (i.e., envelope) of a stimulus. These results have furthermore demonstrated that such coding required, and was optimal for, a nonzero level of neural variability. However, a theoretical understanding of these results is still lacking. Here we provide a comprehensive theoretical framework explaining these experimental findings. Specifically, we use linear response theory to derive an expression relating the correlation coefficient to the instantaneous stimulus amplitude, which takes into account key single-neuron properties such as firing rate and variability as quantified by the coefficient of variation. The theoretical prediction was in excellent agreement with numerical simulations of various integrate-and-fire type neuron models for various parameter values. Further, we demonstrate a form of stochastic resonance, as optimal coding of stimulus variance by correlated activity occurs for a nonzero value of noise intensity. Thus, our results provide a theoretical explanation of the phenomenon by which correlated but not single-neuron activity can code for stimulus amplitude, and of how key single-neuron properties such as firing rate and variability influence such coding. Coding of stimulus amplitude by correlated but not single-neuron activity is thus predicted to be a ubiquitous feature of sensory processing for neurons responding to weak input.
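The central phenomenon, in which pairwise correlation rather than single-neuron firing rate tracks the stimulus envelope, can be reproduced with a toy model: two units "spike" when a mixture of shared and independent Gaussian inputs crosses a threshold, with the shared fraction set by the instantaneous envelope. All parameters below are illustrative, and the model is far simpler than the integrate-and-fire models analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_windows, samples_per_window = 200, 500
# Slowly varying stimulus envelope controlling the input correlation in each window.
envelope = 0.5 + 0.4 * np.sin(2 * np.pi * np.arange(n_windows) / 50)

rates, corrs = [], []
for c in envelope:
    common = rng.normal(size=samples_per_window)
    x1 = np.sqrt(c) * common + np.sqrt(1 - c) * rng.normal(size=samples_per_window)
    x2 = np.sqrt(c) * common + np.sqrt(1 - c) * rng.normal(size=samples_per_window)
    s1, s2 = (x1 > 1.0).astype(float), (x2 > 1.0).astype(float)   # threshold "spikes"
    rates.append(s1.mean())                                        # single-neuron rate
    corrs.append(np.corrcoef(s1, s2)[0, 1])                        # pairwise correlation

# The single-neuron rate stays flat, while the pairwise correlation follows the envelope.
print("rate vs envelope r =", np.corrcoef(rates, envelope)[0, 1])
print("corr vs envelope r =", np.corrcoef(corrs, envelope)[0, 1])
```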
Software Certification - Coding, Code, and Coders
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Holzmann, Gerard J.
2011-01-01
We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.
Emmorey, Karen; Petrich, Jennifer; Gollan, Tamar H.
2012-01-01
Bilinguals who are fluent in American Sign Language (ASL) and English often produce code-blends - simultaneously articulating a sign and a word while conversing with other ASL-English bilinguals. To investigate the cognitive mechanisms underlying code-blend processing, we compared picture-naming times (Experiment 1) and semantic categorization times (Experiment 2) for code-blends versus ASL signs and English words produced alone. In production, code-blending did not slow lexical retrieval for ASL and actually facilitated access to low-frequency signs. However, code-blending delayed speech production because bimodal bilinguals synchronized English and ASL lexical onsets. In comprehension, code-blending speeded access to both languages. Bimodal bilinguals’ ability to produce code-blends without any cost to ASL implies that the language system either has (or can develop) a mechanism for switching off competition to allow simultaneous production of close competitors. Code-blend facilitation effects during comprehension likely reflect cross-linguistic (and cross-modal) integration at the phonological and/or semantic levels. The absence of any consistent processing costs for code-blending illustrates a surprising limitation on dual-task costs and may explain why bimodal bilinguals code-blend more often than they code-switch. PMID:22773886
A qualitative study of DRG coding practice in hospitals under the Thai Universal Coverage scheme.
Pongpirul, Krit; Walker, Damian G; Winch, Peter J; Robinson, Courtland
2011-04-08
In the Thai Universal Coverage health insurance scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group-based retrospective payment, for which the quality of the diagnosis and procedure codes is crucial. However, there has been limited understanding of which health care professions are involved and how the diagnosis and procedure coding is actually done within hospital settings. The objective of this study is to detail hospital coding structure and process, and to describe the roles of key hospital staff and other related internal dynamics in Thai hospitals that affect the quality of data submitted for inpatient care reimbursement. Research involved qualitative semi-structured interviews with 43 participants at 10 hospitals chosen to represent a range of hospital sizes (small/medium/large), location (urban/rural), and type (public/private). Hospital coding practice has structural and process components. While the structural component includes human resources, hospital committees, and information technology infrastructure, the process component comprises all activities from patient discharge to submission of the diagnosis and procedure codes. At least eight health care professional disciplines are involved in the coding process, which comprises seven major steps, each of which involves different hospital staff: 1) Discharge Summarization, 2) Completeness Checking, 3) Diagnosis and Procedure Coding, 4) Code Checking, 5) Relative Weight Challenging, 6) Coding Report, and 7) Internal Audit. The hospital coding practice can be affected by at least five main factors: 1) Internal Dynamics, 2) Management Context, 3) Financial Dependency, 4) Resource and Capacity, and 5) External Factors. Hospital coding practice comprises both structural and process components, involves many health care professional disciplines, and is greatly varied across hospitals as a result of five main factors.
A qualitative study of DRG coding practice in hospitals under the Thai Universal Coverage Scheme
2011-01-01
Background In the Thai Universal Coverage health insurance scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group-based retrospective payment, for which the quality of the diagnosis and procedure codes is crucial. However, there has been limited understanding of which health care professions are involved and how the diagnosis and procedure coding is actually done within hospital settings. The objective of this study is to detail hospital coding structure and process, and to describe the roles of key hospital staff and other related internal dynamics in Thai hospitals that affect the quality of data submitted for inpatient care reimbursement. Methods Research involved qualitative semi-structured interviews with 43 participants at 10 hospitals chosen to represent a range of hospital sizes (small/medium/large), location (urban/rural), and type (public/private). Results Hospital coding practice has structural and process components. While the structural component includes human resources, hospital committees, and information technology infrastructure, the process component comprises all activities from patient discharge to submission of the diagnosis and procedure codes. At least eight health care professional disciplines are involved in the coding process, which comprises seven major steps, each of which involves different hospital staff: 1) Discharge Summarization, 2) Completeness Checking, 3) Diagnosis and Procedure Coding, 4) Code Checking, 5) Relative Weight Challenging, 6) Coding Report, and 7) Internal Audit. The hospital coding practice can be affected by at least five main factors: 1) Internal Dynamics, 2) Management Context, 3) Financial Dependency, 4) Resource and Capacity, and 5) External Factors. Conclusions Hospital coding practice comprises both structural and process components, involves many health care professional disciplines, and is greatly varied across hospitals as a result of five main factors. PMID:21477310
Studies of particle wake potentials in plasmas
NASA Astrophysics Data System (ADS)
Ellis, Ian N.; Graziani, Frank R.; Glosli, James N.; Strozzi, David J.; Surh, Michael P.; Richards, David F.; Decyk, Viktor K.; Mori, Warren B.
2011-09-01
A detailed understanding of electron stopping and scattering in plasmas with variable values for the number of particles within a Debye sphere is still not at hand. Presently, there is some disagreement in the literature concerning the proper description of these processes. Theoretical models assume electrostatic (Coulomb force) interactions between particles and neglect magnetic effects. Developing and validating proper descriptions requires studying the processes using first-principle plasma simulations. We are using the particle-particle particle-mesh (PPPM) code ddcMD and the particle-in-cell (PIC) code BEPS to perform these simulations. As a starting point in our study, we examine the wake of a particle passing through a plasma in 3D electrostatic simulations performed with ddcMD and BEPS. In this paper, we compare the wakes observed in these simulations with each other and predictions from collisionless kinetic theory. The relevance of the work to Fast Ignition is discussed.
Hydrodynamic Studies of Turbulent AGN Tori
NASA Astrophysics Data System (ADS)
Schartmann, M.; Meisenheimer, K.; Klahr, H.; Camenzind, M.; Wolf, S.; Henning, Th.; Burkert, A.; Krause, M.
Recently, the MID-infrared Interferometric instrument (MIDI) at the VLTI has shown that dust tori in the two nearby Seyfert galaxies NGC 1068 and the Circinus galaxy are geometrically thick and can be well described by a thin, warm central disk surrounded by a colder and fluffy torus component. By carrying out hydrodynamical simulations with the help of the TRAMP code (Klahr et al. 1999), we follow the evolution of a young nuclear star cluster in terms of discrete mass loss and energy injection from stellar processes. This naturally leads to a filamentary large-scale torus component, where cold gas is able to flow radially inwards. The filaments join into a dense and very turbulent disk structure. In a post-processing step, we calculate spectral energy distributions and images with the 3D radiative transfer code MC3D (Wolf 2003) and compare them to observations. Turbulence in the dense disk component is investigated in a separate project.
Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures
Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru
2012-01-01
Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding. PMID:21690314
High Angular Momentum Halo Gas: A Feedback and Code-independent Prediction of LCDM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, Kyle R.; Maller, Ariyeh H.; Oñorbe, Jose
We investigate angular momentum acquisition in Milky Way-sized galaxies by comparing five high resolution zoom-in simulations, each implementing identical cosmological initial conditions but utilizing different hydrodynamic codes: Enzo, Art, Ramses, Arepo, and Gizmo-PSPH. Each code implements a distinct set of feedback and star formation prescriptions. We find that while many galaxy and halo properties vary between the different codes (and feedback prescriptions), there is qualitative agreement on the process of angular momentum acquisition in the galaxy’s halo. In all simulations, cold filamentary gas accretion to the halo results in ∼4 times more specific angular momentum in cold halo gas (λ_cold ≳ 0.1) than in the dark matter halo. At z > 1, this inflow takes the form of inspiraling cold streams that are co-directional in the halo of the galaxy and are fueled, aligned, and kinematically connected to filamentary gas infall along the cosmic web. Due to the qualitative agreement among disparate simulations, we conclude that the buildup of high angular momentum halo gas and the presence of these inspiraling cold streams are robust predictions of Lambda Cold Dark Matter galaxy formation, though the detailed morphology of these streams is significantly less certain. A growing body of observational evidence suggests that this process is borne out in the real universe.
Replacing the IRAF/PyRAF Code-base at STScI: The Advanced Camera for Surveys (ACS)
NASA Astrophysics Data System (ADS)
Lucas, Ray A.; Desjardins, Tyler D.; STScI ACS (Advanced Camera for Surveys) Team
2018-06-01
IRAF/PyRAF are no longer viable on the latest hardware often used by HST observers; therefore, STScI no longer actively supports IRAF or PyRAF for most purposes. STScI instrument teams are in the process of converting all of our data processing and analysis code from IRAF/PyRAF to Python, including our calibration reference file pipelines and data reduction software. This is exemplified by our latest ACS Data Handbook, version 9.0, which was recently published in February 2018. Examples of IRAF and PyRAF commands have now been replaced by code blocks in Python, with references linked to documentation on how to download and install the latest Python software via Conda and AstroConda. With the temporary exception of the ACS slitless spectroscopy tool aXe, all ACS-related software is now independent of IRAF/PyRAF. A concerted effort has been made across STScI divisions to help the astronomical community transition from IRAF/PyRAF to Python, with tools such as Python Jupyter notebooks being made to give users workable examples. In addition to our code changes, the new ACS data handbook discusses the latest developments in charge transfer efficiency (CTE) correction, bias de-striping, and updates to the creation and format of calibration reference files, among other topics.
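The flavour of the IRAF/PyRAF-to-Python transition can be illustrated with a typical replacement: image statistics that once came from an IRAF task become a few lines of astropy and numpy. The file name and extension below are placeholders, not a specific handbook example.

```python
import numpy as np
from astropy.io import fits

# Replacement for a typical IRAF-style image statistics task, using astropy + numpy.
# 'jxxxxxxxx_flc.fits' is a placeholder for a calibrated ACS image.
with fits.open("jxxxxxxxx_flc.fits") as hdul:
    sci = hdul["SCI", 1].data          # first science extension

print("mean   :", np.mean(sci))
print("median :", np.median(sci))
print("stddev :", np.std(sci))
print("min/max:", np.min(sci), np.max(sci))
```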
Morrison, Zoe; Fernando, Bernard; Kalra, Dipak; Cresswell, Kathrin; Sheikh, Aziz
2014-01-01
We aimed to explore stakeholder views, attitudes, needs, and expectations regarding likely benefits and risks resulting from increased structuring and coding of clinical information within electronic health records (EHRs). Qualitative investigation in primary and secondary care and research settings throughout the UK. Data were derived from interviews, expert discussion groups, observations, and relevant documents. Participants (n=70) included patients, healthcare professionals, health service commissioners, policy makers, managers, administrators, systems developers, researchers, and academics. Four main themes arose from our data: variations in documentation practice; patient care benefits; secondary uses of information; and informing and involving patients. We observed a lack of guidelines, co-ordination, and dissemination of best practice relating to the design and use of information structures. While we identified immediate benefits for direct care and secondary analysis, many healthcare professionals did not see the relevance of structured and/or coded data to clinical practice. The potential for structured information to increase patient understanding of their diagnosis and treatment contrasted with concerns regarding the appropriateness of coded information for patients. The design and development of EHRs requires the capture of narrative information to reflect patient/clinician communication and computable data for administration and research purposes. Increased structuring and/or coding of EHRs therefore offers both benefits and risks. Documentation standards within clinical guidelines are likely to encourage comprehensive, accurate processing of data. As data structures may impact upon clinician/patient interactions, new models of documentation may be necessary if EHRs are to be read and authored by patients.
Morrison, Zoe; Fernando, Bernard; Kalra, Dipak; Cresswell, Kathrin; Sheikh, Aziz
2014-01-01
Objective We aimed to explore stakeholder views, attitudes, needs, and expectations regarding likely benefits and risks resulting from increased structuring and coding of clinical information within electronic health records (EHRs). Materials and methods Qualitative investigation in primary and secondary care and research settings throughout the UK. Data were derived from interviews, expert discussion groups, observations, and relevant documents. Participants (n=70) included patients, healthcare professionals, health service commissioners, policy makers, managers, administrators, systems developers, researchers, and academics. Results Four main themes arose from our data: variations in documentation practice; patient care benefits; secondary uses of information; and informing and involving patients. We observed a lack of guidelines, co-ordination, and dissemination of best practice relating to the design and use of information structures. While we identified immediate benefits for direct care and secondary analysis, many healthcare professionals did not see the relevance of structured and/or coded data to clinical practice. The potential for structured information to increase patient understanding of their diagnosis and treatment contrasted with concerns regarding the appropriateness of coded information for patients. Conclusions The design and development of EHRs requires the capture of narrative information to reflect patient/clinician communication and computable data for administration and research purposes. Increased structuring and/or coding of EHRs therefore offers both benefits and risks. Documentation standards within clinical guidelines are likely to encourage comprehensive, accurate processing of data. As data structures may impact upon clinician/patient interactions, new models of documentation may be necessary if EHRs are to be read and authored by patients. PMID:24186957
Sparse representation-based image restoration via nonlocal supervised coding
NASA Astrophysics Data System (ADS)
Li, Ao; Chen, Deyun; Sun, Guanglu; Lin, Kezheng
2016-10-01
Sparse representation (SR) and nonlocal technique (NLT) have shown great potential in low-level image processing. However, due to the degradation of the observed image, SR and NLT may not be accurate enough to obtain a faithful restoration results when they are used independently. To improve the performance, in this paper, a nonlocal supervised coding strategy-based NLT for image restoration is proposed. The novel method has three main contributions. First, to exploit the useful nonlocal patches, a nonnegative sparse representation is introduced, whose coefficients can be utilized as the supervised weights among patches. Second, a novel objective function is proposed, which integrated the supervised weights learning and the nonlocal sparse coding to guarantee a more promising solution. Finally, to make the minimization tractable and convergence, a numerical scheme based on iterative shrinkage thresholding is developed to solve the above underdetermined inverse problem. The extensive experiments validate the effectiveness of the proposed method.
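The abstract's final step, an iterative shrinkage-thresholding scheme for the sparse-coding subproblem, follows a standard pattern. Below is a minimal ISTA sketch in Python for the generic l1-regularized patch coding problem; it illustrates the thresholding iteration only, not the paper's full nonlocal supervised objective, and the dictionary, patch, and parameter values are placeholders.

```python
import numpy as np

def ista_sparse_code(D, y, lam=0.1, n_iter=200):
    """Generic ISTA for min_x 0.5*||y - D x||^2 + lam*||x||_1 (one patch)."""
    x = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the smooth term
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)               # gradient of the data-fidelity term
        z = x - grad / L                       # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))                 # toy patch dictionary (atoms in columns)
y = D[:, :5] @ rng.normal(size=5)              # a patch that is truly 5-sparse in D
x = ista_sparse_code(D, y, lam=0.05)
print(np.count_nonzero(np.abs(x) > 1e-3))      # roughly sparse solution
```

Each iteration alternates a gradient step on the data-fidelity term with a soft-threshold, the shrinkage operation the abstract refers to.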
LENSED: a code for the forward reconstruction of lenses and sources from strong lensing observations
NASA Astrophysics Data System (ADS)
Tessore, Nicolas; Bellagamba, Fabio; Metcalf, R. Benton
2016-12-01
Robust modelling of strong lensing systems is fundamental to exploit the information they contain about the distribution of matter in galaxies and clusters. In this work, we present LENSED, a new code which performs forward parametric modelling of strong lenses. LENSED takes advantage of a massively parallel ray-tracing kernel to perform the necessary calculations on a modern graphics processing unit (GPU). This makes the precise rendering of the background lensed sources much faster, and allows the simultaneous optimization of tens of parameters for the selected model. With a single run, the code is able to obtain the full posterior probability distribution for the lens light, the mass distribution and the background source at the same time. LENSED is first tested on mock images which reproduce realistic space-based observations of lensing systems. In this way, we show that it is able to recover unbiased estimates of the lens parameters, even when the sources do not follow exactly the assumed model. Then, we apply it to a subsample of the Sloan Lens ACS Survey lenses, in order to demonstrate its use on real data. The results generally agree with the literature, and highlight the flexibility and robustness of the algorithm.
Creating Synthetic Coronal Observational Data From MHD Models: The Forward Technique
NASA Technical Reports Server (NTRS)
Rachmeler, Laurel A.; Gibson, Sarah E.; Dove, James; Kucera, Therese Ann
2010-01-01
We present a generalized forward code for creating simulated coronal observables off the limb from numerical and analytical MHD models. This generalized forward model is capable of creating emission maps in various wavelengths for instruments such as SXT, EIT, EIS, and coronagraphs, as well as spectropolarimetric images and line profiles. The inputs to our code can be analytic models (of which four come with the code) or 2.5D and 3D numerical datacubes. We present some examples of the observable data created with our code as well as its functional capabilities. This code is currently available for beta-testing (contact authors), with the ultimate goal of release as a SolarSoft package.
An efficient code for the simulation of nonhydrostatic stratified flow over obstacles
NASA Technical Reports Server (NTRS)
Pihos, G. G.; Wurtele, M. G.
1981-01-01
The physical model and computational procedure of the code are described in detail. The code is validated in tests against a variety of known analytical solutions from the literature and is also compared against actual mountain wave observations. The code will receive as initial input either mathematically idealized or discrete observational data. The form of the obstacle or mountain is arbitrary.
Auto-Coding UML Statecharts for Flight Software
NASA Technical Reports Server (NTRS)
Benowitz, Edward G; Clark, Ken; Watney, Garth J.
2006-01-01
Statecharts have been used as a means to communicate behaviors in a precise manner between system engineers and software engineers. Hand-translating a statechart to code, as done on some previous space missions, introduces the possibility of errors in the transformation from chart to code. To improve auto-coding, we have developed a process that generates flight code from UML statecharts. Our process is being used for the flight software on the Space Interferometer Mission (SIM).
Pascual-Leone, A; Yeryomenko, N; Sawashima, T; Warwar, S
2017-05-04
Pascual-Leone and Greenberg's sequential model of emotional processing has been used to explore process in over 24 studies. This line of research shows that emotional processing in good psychotherapy often follows a sequential order, supporting a saw-toothed pattern of change within individual sessions (progressing "2-steps-forward, 1-step-back"). However, one cannot assume that local in-session patterns are scalable across an entire course of therapy. Thus, the primary objective of this exploratory study was to consider how the sequential patterns identified by Pascual-Leone may apply across entire courses of treatment. Intensive emotion coding in two separate single-case designs was submitted for quantitative analyses of longitudinal patterns. Comprehensive coding in these cases involved recording observations for every emotional event in an entire course of treatment (using the Classification of Affective-Meaning States), which were then treated as a 9-point ordinal scale. Applying multilevel modeling to each of the two cases showed significant patterns of change over a large number of sessions, and those patterns were either nested at the within-session level or observed at the broader session-by-session level of change. Examining successful treatment cases showed several theoretically coherent kinds of temporal patterns, although not always in the same case. Clinical or methodological significance of this article: This is the first paper to demonstrate systematic temporal patterns of emotion over the course of an entire treatment. (1) The study offers a proof of concept that longitudinal patterns in the micro-processes of emotion can be objectively derived and quantified. (2) It also shows that patterns in emotion may be identified on the within-session level, as well as the session-by-session level of analysis. (3) Finally, observed processes over time support the ordered pattern of emotional states hypothesized in Pascual-Leone and Greenberg's (2007) model of emotional processing.
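For readers unfamiliar with the statistical machinery, a rough sketch of the multilevel-modeling step is given below in Python. It treats the 1-9 CAMS codes as an approximately continuous outcome with within-session and session-by-session trends and a random intercept per session; the data and column names are synthetic and hypothetical, and the study's actual model specification may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data standing in for the coded events: one row per
# emotional event, with its session number, its position within the session,
# and its 1-9 CAMS code (all values are made up for illustration).
rng = np.random.default_rng(0)
rows = []
for session in range(1, 21):
    for event in range(12):
        cams = np.clip(3 + 0.15 * session + 0.1 * event + rng.normal(0, 1.5), 1, 9)
        rows.append({"session": session, "event_time": event, "cams": cams})
df = pd.DataFrame(rows)

# Linear mixed model: session-by-session and within-session trends as fixed
# effects, random intercept per session (ordinal codes treated as continuous).
result = smf.mixedlm("cams ~ session + event_time", df, groups=df["session"]).fit()
print(result.summary())
```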
Science and Observation Recommendations for Future NASA Carbon Cycle Research
NASA Technical Reports Server (NTRS)
McClain, Charles R.; Collatz, G. J.; Kawa, S. R.; Gregg, W. W.; Gervin, J. C.; Abshire, J. B.; Andrews, A. E.; Behrenfeld, M. J.; Demaio, L. D.; Knox, R. G.
2002-01-01
Between October 2000 and June 2001, an Agency-wide planning effort was organized by elements of NASA Goddard Space Flight Center (GSFC) to define future research and technology development activities. This planning effort was conducted at the request of the Associate Administrator of the Office of Earth Science (Code Y), Dr. Ghassem Asrar, at NASA Headquarters (HQ). The primary points of contact were Dr. Mary Cleave, Deputy Associate Administrator for Advanced Planning at NASA HQ, and Dr. Charles McClain of the Office of Global Carbon Studies (Code 970.2) at GSFC. During this period, GSFC hosted three workshops to define the science requirements and objectives, the observational and modeling requirements to meet the science objectives, the technology development requirements, and a cost plan for both the science program and the new flight projects that will be needed for observations beyond those presently available or currently planned. The plan definition process was very intensive, as HQ required the final presentation package by mid-June 2001. This deadline was met, and the recommendations were ultimately refined and folded into a broader program plan, which also included climate modeling, aerosol observations, and science computing technology development, for contributing to the President's Climate Change Research Initiative. This technical memorandum outlines the process and recommendations made for cross-cutting carbon cycle research as presented in June. A separate NASA document outlines the budget profiles or cost analyses conducted as part of the planning effort.
ERIC Educational Resources Information Center
Gersten, Russell
1991-01-01
The observational study of effective instructional processes in kindergarten by DeVries and others is critiqued. It is maintained that (1) the study takes a narrow approach to constructivism that does not reflect current thinking; (2) there are flaws in the coding system used; and (3) the understanding of instructional issues involving minority…
ERIC Educational Resources Information Center
McIsaac, Caroline; Connolly, Jennifer; McKenney, Katherine S.; Pepler, Debra; Craig, Wendy
2008-01-01
This study examined the association between conflict negotiation and the expression of autonomy in adolescent romantic partners. Thirty-seven couples participated in a globally coded conflict interaction task. Actor-partner interdependence models (APIM) were used to quantify the extent to which boys' and girls' autonomy was linked solely to their…
Mixing of the Interstellar and Solar Plasmas at the Heliospheric Interface
Pogorelov, N. V.; Borovikov, S. N.
2015-10-12
From the ideal MHD perspective, the heliopause is a tangential discontinuity that separates the solar wind plasma from the local interstellar medium plasma. There are physical processes, however, that make the heliopause permeable. They can be subdivided into kinetic and MHD categories. Kinetic processes occur on small length and time scales, and cannot be resolved with MHD equations. On the other hand, MHD instabilities of the heliopause have much larger scales and can be easily observed by spacecraft. The heliopause may also be a subject of magnetic reconnection. In this paper, we discuss mechanisms of plasma mixing at the heliopause in the context of Voyager 1 observations. Numerical results are obtained with a Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS), which is a package of numerical codes capable of performing adaptive mesh refinement simulations of complex plasma flows in the presence of discontinuities and charge exchange between ions and neutral atoms. The flow of the ionized component is described with the ideal MHD equations, while the transport of atoms is governed either by the Boltzmann equation or multiple Euler gas dynamics equations. The code can also treat nonthermal ions and turbulence produced by them.
Air Traffic Controller Working Memory: Considerations in Air Traffic Control Tactical Operations
1993-09-01
Contents fragment: Information Processing System; Air Traffic Controller Memory; Memory Codes (Visual Codes, Phonetic Codes, Semantic Codes). ...raise an awareness of the memory requirements of ATC tactical operations by presenting information on working memory processes that are relevant to... working memory permeates every aspect of the controller's ability to process air traffic information and control live traffic.
Yurko, Joseph P.; Buongiorno, Jacopo; Youngblood, Robert
2015-05-28
System codes for simulation of safety performance of nuclear plants may contain parameters whose values are not known very accurately. New information from tests or operating experience is incorporated into safety codes by a process known as calibration, which reduces uncertainty in the output of the code and thereby improves its support for decision-making. The work reported here implements several improvements on classic calibration techniques afforded by modern analysis techniques. The key innovation has come from development of code surrogate model (or code emulator) construction and prediction algorithms. Use of a fast emulator makes the calibration processes used here with Markov Chain Monte Carlo (MCMC) sampling feasible. This study uses Gaussian Process (GP) based emulators, which have been used previously to emulate computer codes in the nuclear field. The present work describes the formulation of an emulator that incorporates GPs into a factor analysis-type or pattern recognition-type model. This "function factorization" Gaussian Process (FFGP) model allows overcoming limitations present in standard GP emulators, thereby improving both accuracy and speed of the emulator-based calibration process. Calibration of a friction-factor example using a Method of Manufactured Solution is performed to illustrate key properties of the FFGP based process.
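As a concrete illustration of the emulator-based calibration workflow described above (a standard GP surrogate with Metropolis sampling, not the FFGP model itself), the following Python sketch fits a Gaussian-process emulator to a handful of "code runs" and then samples a posterior for one parameter with a random-walk MCMC; the code output, observation, and noise level are toy values.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Training design: expensive-code runs at sampled parameter values (1-D for clarity).
theta_train = np.linspace(0.0, 1.0, 15).reshape(-1, 1)    # design points (illustrative)
y_train = np.sin(3.0 * theta_train).ravel()               # stand-in for code output

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(theta_train, y_train)                               # fast surrogate of the code

y_obs, sigma = 0.6, 0.05                                   # "measured" data and noise (made up)

def log_post(theta):
    if not 0.0 <= theta <= 1.0:                            # uniform prior on [0, 1]
        return -np.inf
    mu = gp.predict(np.array([[theta]]))[0]
    return -0.5 * ((y_obs - mu) / sigma) ** 2

# Random-walk Metropolis using the emulator instead of the expensive code.
rng = np.random.default_rng(0)
theta, samples = 0.5, []
for _ in range(5000):
    prop = theta + 0.05 * rng.normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
print("posterior mean:", np.mean(samples[1000:]))
```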
Knowledge of response location alone is not sufficient to generate social inhibition of return.
Welsh, Timothy N; Manzone, Joseph; McDougall, Laura
2014-11-01
Previous research has revealed that the inhibition of return (IOR) effect emerges when individuals respond to a target at the same location as their own previous response or the previous response of a co-actor. The latter social IOR effect is thought to occur because the observation of co-actor's response evokes a representation of that action in the observer and that the observation-evoked response code subsequently activates the inhibitory mechanisms underlying IOR. The present study was conducted to determine if knowledge of the co-actor's response alone is sufficient to evoke social IOR. Pairs of participants completed responses to targets that appeared at different button locations. Button contact generated location-contingent auditory stimuli (high and low tones in Experiment 1 and colour words in Experiment 2). In the Full condition, the observer saw the response and heard the auditory stimuli. In the Auditory Only condition, the observer did not see the co-actor's response, but heard the auditory stimuli generated via button contact to indicate response endpoint. It was found that, although significant individual and social IOR effects emerged in the Full conditions, there were no social IOR effects in the Auditory Only conditions. These findings suggest that knowledge of the co-actor's response alone via auditory information is not sufficient to activate the inhibitory processes leading to IOR. The activation of the mechanisms that lead to social IOR seems to be dependent on processing channels that code the spatial characteristics of action. Copyright © 2014 Elsevier B.V. All rights reserved.
Shlizerman, Eli; Riffell, Jeffrey A.; Kutz, J. Nathan
2014-01-01
The antennal lobe (AL), the olfactory processing center in insects, is able to process stimuli into distinct neural activity patterns, called olfactory neural codes. To model their dynamics we perform multichannel recordings from the projection neurons in the AL driven by different odorants. We then derive a dynamic neuronal network from the electrophysiological data. The network consists of lateral-inhibitory neurons and excitatory neurons (modeled as firing-rate units), and is capable of producing unique olfactory neural codes for the tested odorants. To construct the network, we (1) design a projection, an odor space, for the neural recording from the AL, which discriminates between distinct odorant trajectories; (2) characterize scent recognition, i.e., decision-making based on olfactory signals; and (3) infer the wiring of the neural circuit, the connectome of the AL. We show that the constructed model is consistent with biological observations, such as contrast enhancement and robustness to noise. The study suggests a data-driven approach to answer a key biological question in identifying how lateral inhibitory neurons can be wired to excitatory neurons to permit robust activity patterns. PMID:25165442
NASA Astrophysics Data System (ADS)
Dupree, N. A.; Moore, R. C.
2011-12-01
Model predictions of the ELF radio atmospheric generated by rocket-triggered lightning are compared with observations performed at Arrival Heights, Antarctica. The ability to infer source characteristics using observations at great distances may prove to greatly enhance the understanding of lightning processes that are associated with the production of transient luminous events (TLEs) as well as other ionospheric effects associated with lightning. The modeling of the sferic waveform is carried out using a modified version of the Long Wavelength Propagation Capability (LWPC) code developed by the Naval Ocean Systems Center over a period of many years. LWPC is an inherently narrowband propagation code that has been modified to predict the broadband response of the Earth-ionosphere waveguide to an impulsive lightning flash while preserving the ability of LWPC to account for an inhomogeneous waveguide. ELF observations performed at Arrival Heights, Antarctica during rocket-triggered lightning experiments at the International Center for Lightning Research and Testing (ICLRT) located at Camp Blanding, Florida are presented. The lightning current waveforms directly measured at the base of the lightning channel (at the ICLRT) are used together with LWPC to predict the sferic waveform observed at Arrival Heights under various ionospheric conditions. This paper critically compares observations with model predictions.
Parallel Demand-Withdraw Processes in Family Therapy for Adolescent Drug Abuse
Rynes, Kristina N.; Rohrbaugh, Michael J.; Lebensohn-Chialvo, Florencia; Shoham, Varda
2013-01-01
Isomorphism, or parallel process, occurs in family therapy when patterns of therapist-client interaction replicate problematic interaction patterns within the family. This study investigated parallel demand-withdraw processes in Brief Strategic Family Therapy (BSFT) for adolescent drug abuse, hypothesizing that therapist-demand/adolescent-withdraw interaction (TD/AW) cycles observed early in treatment would predict poor adolescent outcomes at follow-up for families who exhibited entrenched parent-demand/adolescent-withdraw interaction (PD/AW) before treatment began. Participants were 91 families who received at least 4 sessions of BSFT in a multi-site clinical trial on adolescent drug abuse (Robbins et al., 2011). Prior to receiving therapy, families completed videotaped family interaction tasks from which trained observers coded PD/AW. Another team of raters coded TD/AW during two early BSFT sessions. The main dependent variable was the number of drug use days that adolescents reported in Timeline Follow-Back interviews 7 to 12 months after family therapy began. Zero-inflated Poisson (ZIP) regression analyses supported the main hypothesis, showing that PD/AW and TD/AW interacted to predict adolescent drug use at follow-up. For adolescents in high PD/AW families, higher levels of TD/AW predicted significant increases in drug use at follow-up, whereas for low PD/AW families, TD/AW and follow-up drug use were unrelated. Results suggest that attending to parallel demand-withdraw processes in parent/adolescent and therapist/adolescent dyads may be useful in family therapy for substance-using adolescents. PMID:23438248
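A minimal sketch of the zero-inflated Poisson analysis described in the abstract is shown below, using statsmodels in Python; the data, column names, and the choice of a constant-only inflation model are illustrative assumptions, not the study's actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

# Synthetic per-family data standing in for the study variables (all made up):
# pre-treatment parent-demand/adolescent-withdraw (pdaw), early in-session
# therapist-demand/adolescent-withdraw (tdaw), and drug-use days at follow-up.
rng = np.random.default_rng(1)
n = 91
pdaw = rng.normal(size=n)
tdaw = rng.normal(size=n)
mu = np.exp(0.8 + 0.2 * pdaw + 0.1 * tdaw + 0.4 * pdaw * tdaw)
y = np.where(rng.random(n) < 0.35, 0, rng.poisson(mu))          # extra zeros

X = sm.add_constant(pd.DataFrame({"pdaw": pdaw, "tdaw": tdaw, "pdaw_x_tdaw": pdaw * tdaw}))
model = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1)))     # constant-only inflation part
result = model.fit(maxiter=500, disp=False)
print(result.summary())
```

A significant positive coefficient on the interaction term corresponds to the reported pattern: therapist-demand/adolescent-withdraw predicts more follow-up drug use only when pre-treatment parent-demand/adolescent-withdraw is high.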
Time-resolved x-ray spectra from laser-generated high-density plasmas
NASA Astrophysics Data System (ADS)
Andiel, U.; Eidmann, Klaus; Witte, Klaus-Juergen
2001-04-01
We focused frequency-doubled ultrashort laser pulses on solid C, F, Na, and Al targets; K-shell emission was systematically investigated by time-resolved spectroscopy using a sub-ps streak camera. A large number of laser shots can be accumulated when triggering the camera with an Auston switch system at very high temporal precision. The system provides an outstanding time resolution of 1.7 ps while accumulating thousands of laser shots. The time duration of the He-α K-shell resonance lines was observed in the range of (2-4) ps and shows a decrease with the atomic number. The experimental results are well reproduced by hydro-code simulations post-processed with an atomic kinetics code.
Bosse, Stefan
2015-01-01
Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks in simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strong heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques. PMID:25690550
NASA Astrophysics Data System (ADS)
Trejos, Sorayda; Fredy Barrera, John; Torroba, Roberto
2015-08-01
We present for the first time an optical encrypting-decrypting protocol for recovering messages without speckle noise. This is a digital holographic technique using a 2f scheme to process QR code entries. In the procedure, letters used to compose eventual messages are individually converted into a QR code, and then each QR code is divided into portions. Through a holographic technique, we store each processed portion. After filtering and repositioning, we add all processed data to create a single pack, thus simplifying the handling and recovery of multiple QR code images, representing the first multiplexing procedure applied to processed QR codes. All QR codes are recovered in a single step and in the same plane, showing neither cross-talk nor noise problems as in other methods. Experiments have been conducted using an interferometric configuration, and comparisons between unprocessed and recovered QR codes have been performed, showing differences between them due to the involved processing. Recovered QR codes can be successfully scanned, thanks to their noise tolerance. Finally, the appropriate sequence in the scanning of the recovered QR codes brings a noiseless retrieved message. Additionally, to procure maximum security, the multiplexed pack could be multiplied by a digital diffuser so as to encrypt it. The encrypted pack is easily decoded by multiplying it by the complex conjugate of the diffuser. As this is a digital operation, no noise is added. Therefore, this technique is threefold robust, involving multiplexing, encryption, and the need of a sequence to retrieve the outcome.
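The final encryption step, multiplying the multiplexed pack by a digital diffuser and decoding with its complex conjugate, is a purely digital operation that is easy to verify numerically. The Python sketch below uses a random unit-amplitude phase mask as the diffuser; the array sizes and data are placeholders, and the holographic 2f multiplexing itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the multiplexed pack of processed QR-code data (complex field).
pack = rng.normal(size=(256, 256)) + 1j * rng.normal(size=(256, 256))

# Digital diffuser: unit-amplitude random phase mask.
diffuser = np.exp(1j * 2 * np.pi * rng.random((256, 256)))

encrypted = pack * diffuser                    # encryption: multiply by the diffuser
decrypted = encrypted * np.conj(diffuser)      # decryption: multiply by its complex conjugate

print(np.allclose(decrypted, pack))            # True: the digital step adds no noise
```

Because the diffuser has unit amplitude everywhere, multiplying by its conjugate exactly cancels the phase mask, which is why the abstract can claim that no noise is added in this step.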
Implementation of a Post-Code Pause: Extending Post-Event Debriefing to Include Silence.
Copeland, Darcy; Liska, Heather
2016-01-01
This project arose out of a need to address two issues at our hospital: we lacked a formal debriefing process for code/trauma events and the emergency department wanted to address the psychological and spiritual needs of code/trauma responders. We developed a debriefing process for code/trauma events that intentionally included mechanisms to facilitate recognition, acknowledgment, and, when needed, responses to the psychological and spiritual needs of responders. A post-code pause process was implemented in the emergency department with the aims of standardizing a debriefing process, encouraging a supportive team-based culture, improving transition back to "normal" activities after responding to code/trauma events, and providing responders an opportunity to express reverence for patients involved in code/trauma events. The post-code pause process incorporates a moment of silence and the addition of two simple questions to a traditional operational debrief. Implementation of post-code pauses was feasible despite the fast paced nature of the department. At the end of the 1-year pilot period, staff members reported increases in feeling supported by peers and leaders, their ability to pay homage to patients, and having time to regroup prior to returning to their assignment. There was a decrease in the number of respondents reporting having thoughts or feelings associated with the event within 24 hr. The pauses create a mechanism for operational team debriefing, provide an opportunity for staff members to honor their work and their patients, and support an environment in which the psychological and spiritual effects of responding to code/trauma events can be acknowledged.
Precision studies of observables in p p → W → lν _l and pp → γ ,Z → l^+ l^- processes at the LHC
NASA Astrophysics Data System (ADS)
Alioli, S.; Arbuzov, A. B.; Bardin, D. Yu.; Barzè, L.; Bernaciak, C.; Bondarenko, S. G.; Carloni Calame, C. M.; Chiesa, M.; Dittmaier, S.; Ferrera, G.; de Florian, D.; Grazzini, M.; Höche, S.; Huss, A.; Jadach, S.; Kalinovskaya, L. V.; Karlberg, A.; Krauss, F.; Li, Y.; Martinez, H.; Montagna, G.; Mück, A.; Nason, P.; Nicrosini, O.; Petriello, F.; Piccinini, F.; Płaczek, W.; Prestel, S.; Re, E.; Sapronov, A. A.; Schönherr, M.; Schwinn, C.; Vicini, A.; Wackeroth, D.; Was, Z.; Zanderighi, G.
2017-05-01
This report was prepared in the context of the LPCC Electroweak Precision Measurements at the LHC WG (https://lpcc.web.cern.ch/lpcc/index.php?page=electroweak_wg) and summarizes the activity of a subgroup dedicated to the systematic comparison of public Monte Carlo codes, which describe the Drell-Yan processes at hadron colliders, in particular at the CERN Large Hadron Collider (LHC). This work represents an important step towards the definition of an accurate simulation framework necessary for very high-precision measurements of electroweak (EW) observables such as the W boson mass and the weak mixing angle. All the codes considered in this report share at least next-to-leading-order (NLO) accuracy in the prediction of the total cross sections in an expansion either in the strong or in the EW coupling constant. The NLO fixed-order predictions have been scrutinized at the technical level, using exactly the same inputs, setup and perturbative accuracy, in order to quantify the level of agreement of different implementations of the same calculation. A dedicated comparison, again at the technical level, of three codes that reach next-to-next-to-leading-order (NNLO) accuracy in quantum chromodynamics (QCD) for the total cross section has also been performed. These fixed-order results are a well-defined reference that allows a classification of the impact of higher-order sets of radiative corrections. Several examples of higher-order effects due to the strong or the EW interaction are discussed in this common framework. Also the combination of QCD and EW corrections is discussed, together with the ambiguities that affect the final result, due to the choice of a specific combination recipe. All the codes considered in this report have been run by the respective authors, and the results presented here constitute a benchmark that should be always checked/reproduced before any high-precision analysis is conducted based on these codes. In order to simplify these benchmarking procedures, the codes used in this report, together with the relevant input files and running instructions, can be found in a repository at https://twiki.cern.ch/twiki/bin/view/Main/DrellYanComparison.
Electronic data processing codes for California wildland plants
Merton J. Reed; W. Robert Powell; Bur S. Bal
1963-01-01
Systematized codes for plant names are helpful to a wide variety of workers who must record the identity of plants in the field. We have developed such codes for a majority of the vascular plants encountered on California wildlands and have published the codes in pocket size, using photo-reductions of the output from data processing machines. A limited number of the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rest, J; Gehl, S M
1979-01-01
GRASS-SST and FASTGRASS are mechanistic computer codes for predicting fission-gas behavior in UO2-based fuels during steady-state and transient conditions. FASTGRASS was developed in order to satisfy the need for a fast-running alternative to GRASS-SST. Although based on GRASS-SST, FASTGRASS is approximately an order of magnitude quicker in execution. The GRASS-SST transient analysis has evolved through comparisons of code predictions with the fission-gas release and physical phenomena that occur during reactor operation and transient direct-electrical-heating (DEH) testing of irradiated light-water reactor fuel. The FASTGRASS calculational procedure is described in this paper, along with models of key physical processes included in both FASTGRASS and GRASS-SST. Predictions of fission-gas release obtained from GRASS-SST and FASTGRASS analyses are compared with experimental observations from a series of DEH tests. The major conclusion is that the computer codes should include an improved model for the evolution of the grain-edge porosity.
Caro, I; Stiles, W B
1997-01-01
Translating a verbal coding system from one language to another can yield unexpected insights into the process of communication in different cultures. This paper describes the problems and understandings we encountered as we translated a verbal response modes (VRM) taxonomy from English into Spanish. Standard translations of text (e.g., psychotherapeutic dialogue) systematically change the form of certain expressions, so supposedly equivalent expressions had different VRM codings in the two languages. Prominent examples of English forms whose translation had different codes in Spanish included tags, question forms, and "let's" expressions. Insofar as participants use such forms to convey nuances of their relationship, standard translations of counseling or psychotherapy sessions or other conversations may systematically misrepresent the relationship between the participants. The differences revealed in translating the VRM system point to subtle but important differences in the degrees of verbal directiveness and inclusion in English versus Spanish, which converge with other observations of differences in individualism and collectivism between Anglo and Hispanic cultures.
Oya, Eriko; Kato, Hiroaki; Chikashige, Yuji; Tsutsumi, Chihiro; Hiraoka, Yasushi; Murakami, Yota
2013-01-01
Heterochromatin at the pericentromeric repeats in fission yeast is assembled and spread by an RNAi-dependent mechanism, which is coupled with the transcription of non-coding RNA from the repeats by RNA polymerase II. In addition, Rrp6, a component of the nuclear exosome, also contributes to heterochromatin assembly and is coupled with non-coding RNA transcription. The multi-subunit complex Mediator, which directs initiation of RNA polymerase II-dependent transcription, has recently been suggested to function after initiation in processes such as elongation of transcription and splicing. However, the role of Mediator in the regulation of chromatin structure is not well understood. We investigated the role of Mediator in pericentromeric heterochromatin formation and found that deletion of specific subunits of the head domain of Mediator compromised heterochromatin structure. The Mediator head domain was required for Rrp6-dependent heterochromatin nucleation at the pericentromere and for RNAi-dependent spreading of heterochromatin into the neighboring region. In the latter process, Mediator appeared to contribute to efficient processing of siRNA from transcribed non-coding RNA, which was required for efficient spreading of heterochromatin. Furthermore, the head domain directed efficient transcription in heterochromatin. These results reveal a pivotal role for Mediator in multiple steps of transcription-coupled formation of pericentromeric heterochromatin. This observation further extends the role of Mediator to co-transcriptional chromatin regulation.
NASA Astrophysics Data System (ADS)
Jung, Seongmoon; Sung, Wonmo; Lee, Jaegi; Ye, Sung-Joon
2018-01-01
Emerging radiological applications of gold nanoparticles demand low-energy electron/photon transport calculations including details of an atomic relaxation process. Recently, MCNP® version 6.1 (MCNP6.1) has been released with extended cross-sections for low-energy electrons/photons, subshell photoelectric cross-sections, and more detailed atomic relaxation data than the previous versions. Despite these new features, the atomic relaxation process in MCNP6.1 has not yet been fully tested with its new physics library (eprdata12), which is based on the Evaluated Atomic Data Library (EADL). In this study, MCNP6.1 was compared with GATEv7.2, PENELOPE2014, and EGSnrc, which have often been used to simulate low-energy atomic relaxation processes. The simulations were performed to acquire both photon and electron spectra produced by interactions of 15 keV electrons or photons with a 10-nm-thick gold nano-slab. The photon-induced fluorescence X-rays from MCNP6.1 fairly agreed with those from GATEv7.2 and PENELOPE2014, while the electron-induced fluorescence X-rays of the four codes showed more or less discrepancies. A coincidence was observed in the photon-induced Auger electrons simulated by MCNP6.1 and GATEv7.2. The recent release of MCNP6.1 with eprdata12 can be used to simulate the photon-induced atomic relaxation.
Cell-assembly coding in several memory processes.
Sakurai, Y
1998-01-01
The present paper discusses why the cell assembly, i.e., an ensemble population of neurons with flexible functional connections, is a tenable view of the basic code for information processes in the brain. The main properties indicating the reality of cell-assembly coding are neuronal overlaps among different assemblies and connection dynamics within and among the assemblies. The former can be detected as multiple functions of individual neurons in processing different kinds of information. Individual neurons appear to be involved in multiple information processes. The latter can be detected as changes of functional synaptic connections in processing different kinds of information. Correlations of activity among some of the recorded neurons appear to change in multiple information processes. Recent experiments have compared several different memory processes (tasks) and detected these two main properties, indicating cell-assembly coding of memory in the working brain. The first experiment compared different types of processing of identical stimuli, i.e., working memory and reference memory of auditory stimuli. The second experiment compared identical processes of different types of stimuli, i.e., discriminations of simple auditory, simple visual, and configural auditory-visual stimuli. The third experiment compared identical processes of different types of stimuli with or without temporal processing of stimuli, i.e., discriminations of elemental auditory, configural auditory-visual, and sequential auditory-visual stimuli. Some possible features of the cell-assembly coding, especially "dual coding" by individual neurons and cell assemblies, are discussed for future experimental approaches. Copyright 1998 Academic Press.
Mechanism on brain information processing: Energy coding
NASA Astrophysics Data System (ADS)
Wang, Rubin; Zhang, Zhikang; Jiao, Xianfa
2006-09-01
According to the experimental result that signal transmission and neuronal energetic demands are tightly coupled to information coding in the cerebral cortex, the authors present a brand new scientific theory that offers a unique mechanism for brain information processing. They demonstrate that the neural coding produced by the activity of the brain is well described by the theory of energy coding. Due to the energy coding model's ability to reveal mechanisms of brain information processing based upon known biophysical properties, they can not only reproduce various experimental results of neuroelectrophysiology but also quantitatively explain the recent experimental results from neuroscientists at Yale University by means of the principle of energy coding. Because the theory of energy coding bridges the gap between functional connections within a biological neural network and energetic consumption, they estimate that the theory has very important consequences for quantitative research of cognitive function.
Hamming and Accumulator Codes Concatenated with MPSK or QAM
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel
2009-01-01
In a proposed coding-and-modulation scheme, a high-rate binary data stream would be processed as follows: 1. The input bit stream would be demultiplexed into multiple bit streams. 2. The multiple bit streams would be processed simultaneously into a high-rate outer Hamming code that would comprise multiple short constituent Hamming codes, a distinct constituent Hamming code for each stream. 3. The streams would be interleaved. The interleaver would have a block structure that would facilitate parallelization for high-speed decoding. 4. The interleaved streams would be further processed simultaneously into an inner two-state, rate-1 accumulator code that would comprise multiple constituent accumulator codes, a distinct accumulator code for each stream. 5. The resulting bit streams would be mapped into symbols to be transmitted by use of a higher-order modulation, for example, M-ary phase-shift keying (MPSK) or quadrature amplitude modulation (QAM). The novelty of the scheme lies in the concatenation of the multiple-constituent Hamming and accumulator codes and the corresponding parallel architectures of the encoder and decoder circuitry (see figure) needed to process the multiple bit streams simultaneously. As in the cases of other parallel-processing schemes, one advantage of this scheme is that the overall data rate could be much greater than the data rate of each encoder and decoder stream and, hence, the encoder and decoder could handle data at an overall rate beyond the capability of the individual encoder and decoder circuits.
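To make the outer/inner structure concrete, the sketch below encodes a single demultiplexed stream with a (7,4) Hamming code and then passes it through a rate-1 two-state accumulator (a running XOR); the interleaver and the MPSK/QAM mapping are omitted, and the generator matrix shown is one standard systematic form, not necessarily the one used in the proposed scheme.

```python
import numpy as np

# Generator matrix of a (7,4) Hamming code in systematic form [I4 | P].
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)

def hamming74_encode(bits4):
    """Outer code: map 4 information bits to a 7-bit Hamming codeword."""
    return (np.array(bits4) @ G) % 2

def accumulate(bits):
    """Inner code: rate-1 two-state accumulator, output[i] = input[i] XOR output[i-1]."""
    return np.bitwise_xor.accumulate(np.asarray(bits, dtype=int) % 2)

msg = [1, 0, 1, 1]                 # 4 information bits of one demultiplexed stream
coded = hamming74_encode(msg)      # outer Hamming codeword
inner = accumulate(coded)          # inner accumulator output (interleaving omitted)
print(coded, inner)
```

In the full scheme, each parallel stream would get its own copy of this encoder pair, with the block interleaver between the outer and inner stages and the resulting bits mapped onto MPSK or QAM symbols.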
A Parameter Tuning Scheme of Sea-ice Model Based on Automatic Differentiation Technique
NASA Astrophysics Data System (ADS)
Kim, J. G.; Hovland, P. D.
2001-05-01
Automatic differentiation (AD) technique was used to illustrate a new approach to the parameter tuning of an uncoupled sea-ice model. The atmospheric forcing field of 1992, obtained from NCEP data, was used as the forcing variables in the study. The simulation results were compared with the observed ice movement provided by the International Arctic Buoy Programme (IABP). All of the numerical experiments were based on a widely used dynamic and thermodynamic model for simulating the seasonal sea-ice change of the main Arctic ocean. We selected five dynamic and thermodynamic parameters for the tuning process, in which the cost function defined by the norm of the difference between observed and simulated ice drift locations was minimized. The selected parameters are the air and ocean drag coefficients, the ice strength constant, the turning angle at the ice-air/ocean interface, and the bulk sensible heat transfer coefficient. The drag coefficients were the major parameters controlling sea-ice movement and extent. The results of the study show that more realistic simulations of ice thickness distribution were produced by tuning the simulated ice drift trajectories. In the tuning process, the L-BFGS-B minimization algorithm of a quasi-Newton method was used. The derivative information required in the minimization iterations was provided by the AD-processed Fortran code. Compared with a conventional approach, the AD-generated derivative code provided fast and robust computations of derivative information.
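The tuning loop described above, an AD-generated gradient of a trajectory-misfit cost fed to L-BFGS-B, can be sketched in a few lines. The Python example below uses JAX as a stand-in for the Fortran AD tooling and a deliberately trivial drift model with two drag-like parameters; the model, data, and bounds are all illustrative assumptions.

```python
import numpy as np
import jax
import jax.numpy as jnp
from scipy.optimize import minimize

obs = jnp.linspace(0.0, 9.0, 10)                 # "observed" drift positions (toy data)

def simulate(params, n=10):
    # Toy stand-in for the sea-ice model: drift distance grows with the air drag
    # coefficient and shrinks with the ocean drag coefficient.
    drag_air, drag_ocean = params
    return jnp.arange(n) * 2.0 * drag_air / (drag_air + drag_ocean)

def cost(params):
    return jnp.sum((simulate(params) - obs) ** 2)    # misfit to the observed trajectory

grad = jax.grad(cost)                                 # AD-generated derivative code

res = minimize(lambda p: float(cost(jnp.asarray(p))),
               x0=np.array([0.5, 0.5]),
               jac=lambda p: np.asarray(grad(jnp.asarray(p)), dtype=float),
               method="L-BFGS-B",
               bounds=[(1e-3, 5.0), (1e-3, 5.0)])
print(res.x)
```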
Variational estimation of process parameters in a simplified atmospheric general circulation model
NASA Astrophysics Data System (ADS)
Lv, Guokun; Koehl, Armin; Stammer, Detlef
2016-04-01
Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually manually adjusted to reduce the difference between the model mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated using automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about 1 day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical twin experiments, we found that the feasible assimilation window could be extended to over 1 year and accurate parameters could be retrieved. Although the nudging terms transform into a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.
NASA Astrophysics Data System (ADS)
Shimizu, Kenji; Ikura, Hirohiko; Ikezoe, Junpei; Nagareda, Tomofumi; Yagi, Naoto; Umetani, Keiji; Imai, Yutaka
2004-04-01
We have previously reported a synchrotron radiation (SR) microtomography system constructed at the bending magnet beamline at SPring-8. This system has been applied to lungs obtained at autopsy and inflated and fixed by Heitzman's method. Normal lung and lung specimens with two different types of pathologic processes (fibrosis and emphysema) were included. Serial SR microtomographic images were stacked to yield isotropic volumetric data with high resolution (12 μm³ voxel size). Within the air spaces of a subdivision of the acinus, each voxel is segmented three-dimensionally using a region growing algorithm ("rolling ball algorithm"). For each voxel within the segmented air spaces, two types of voxel coding have been performed: single-seeded (SS) coding and boundary-seeded (BS) coding, in which the minimum distance from an initial point as the only seed point and from all object boundary voxels as a seed set were calculated and assigned as the code values to each voxel, respectively. With these two codes, combinations of surface rendering and volume rendering techniques were applied to visualize the three-dimensional morphology of a subdivision of the acinus. Furthermore, the sequential filling process of air into a subdivision of the acinus was simulated under several conditions to visualize the ventilation procedure (air flow and diffusion). A subdivision of the acinus was reconstructed three-dimensionally, demonstrating the normal architecture of the human lung. Significant differences in the appearance of the ventilation procedure were observed between normal lung and the two pathologic processes due to the alteration of the lung architecture. Three-dimensional reconstruction of the microstructure of a subdivision of the acinus and visualization of the ventilation procedure (air flow and diffusion) with SR microtomography would offer a new approach to studying the morphology, physiology, and pathophysiology of the human respiratory system.
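The two voxel codings have simple algorithmic counterparts: the boundary-seeded code is a distance transform of the segmented air-space mask, and the single-seeded code is a geodesic (breadth-first) distance from one seed voxel restricted to the mask. The Python sketch below shows both on a toy mask; it is a schematic illustration, not the authors' implementation, and the 6-connected BFS is one of several reasonable neighbourhood choices.

```python
import numpy as np
from collections import deque
from scipy.ndimage import distance_transform_edt

def boundary_seeded_code(mask):
    """BS code: distance of each air-space voxel to the nearest boundary (0 outside)."""
    return distance_transform_edt(mask)

def single_seeded_code(mask, seed):
    """SS code: 6-connected geodesic (BFS) distance from a single seed voxel inside the mask."""
    dist = np.full(mask.shape, -1, dtype=int)
    dist[seed] = 0
    q = deque([seed])
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < mask.shape[i] for i in range(3)) and mask[n] and dist[n] < 0:
                dist[n] = dist[z, y, x] + 1
                q.append(n)
    return dist

# Tiny synthetic air-space mask (a 5x5x5 cube of "air" voxels inside a 7x7x7 volume).
mask = np.zeros((7, 7, 7), dtype=bool)
mask[1:6, 1:6, 1:6] = True
print(boundary_seeded_code(mask).max(), single_seeded_code(mask, (3, 3, 3)).max())
```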
Automatic Processing of Reactive Polymers
NASA Technical Reports Server (NTRS)
Roylance, D.
1985-01-01
A series of process modeling computer codes were examined. The codes use finite element techniques to determine the time-dependent process parameters operative during nonisothermal reactive flows such as can occur in reaction injection molding or composites fabrication. The use of these analytical codes to perform experimental control functions is examined; since the models can determine the state of all variables everywhere in the system, they can be used in a manner similar to currently available experimental probes. A small but well instrumented reaction vessel in which fiber-reinforced plaques are cured using computer control and data acquisition was used. The finite element codes were also extended to treat this particular process.
Nuclear shell model code CRUNCHER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Resler, D.A.; Grimes, S.M.
1988-05-01
A new nuclear shell model code CRUNCHER, patterned after the code VLADIMIR, has been developed. While CRUNCHER and VLADIMIR employ the techniques of an uncoupled basis and the Lanczos process, improvements in the new code allow it to handle much larger problems than the previous code and to perform them more efficiently. Tests involving a moderately sized calculation indicate that CRUNCHER running on a SUN 3/260 workstation requires approximately one-half the central processing unit (CPU) time required by VLADIMIR running on a CRAY-1 supercomputer.
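For context, the Lanczos process mentioned above reduces a large symmetric Hamiltonian to a small tridiagonal matrix whose eigenvalues (Ritz values) approximate the extreme eigenvalues of the original. The Python sketch below shows the textbook iteration on a random symmetric matrix, without the reorthogonalization or sparse-matrix machinery a production shell-model code would need; it is a generic illustration, not the CRUNCHER algorithm.

```python
import numpy as np

def lanczos_eigvals(H, v0, m=60):
    """Plain Lanczos (no reorthogonalization): tridiagonalize symmetric H in an
    m-dimensional Krylov space and return the Ritz values."""
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros_like(v)
    for j in range(m):
        w = H @ v
        alpha[j] = v @ w
        w = w - alpha[j] * v - (beta[j - 1] * v_prev if j > 0 else 0.0)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            v_prev, v = v, w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

rng = np.random.default_rng(1)
A = rng.normal(size=(300, 300))
H = (A + A.T) / 2                                   # toy symmetric "Hamiltonian"
ritz = lanczos_eigvals(H, rng.normal(size=300))
print(ritz[:3])                                     # approximate lowest eigenvalues
print(np.linalg.eigvalsh(H)[:3])                    # exact values for comparison
```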
Continuation of research into language concepts for the mission support environment: Source code
NASA Technical Reports Server (NTRS)
Barton, Timothy J.; Ratner, Jeremiah M.
1991-01-01
Research into language concepts for the Mission Control Center is presented. The source code is provided; the file contains the routines that allow source code files to be created and compiled. The build process assumes that all elements and the COMP exist in the current directory, and it places as much of the code generation as possible on the preprocessor. A summary is given of the source files as used and/or manipulated by the build routine.
Meier, Benjamin Mason; De Milliano, Marlous; Chakrabarti, Averi; Kim, Yuna
2017-11-04
Employing novel coding methods to evaluate human rights monitoring, this article examines the influence of United Nations (UN) treaty bodies on national implementation of the human right to health. The advancement of the right to health in the UN human rights system has shifted over the past 20 years from the development of norms under international law to the implementation of those norms through national policy. Facilitating accountability for this rights-based policy implementation under the right to health, the UN Committee on Economic, Social and Cultural Rights (CESCR) monitors state implementation by reviewing periodic reports from state parties, engaging in formal sessions of 'constructive dialogue' with state representatives, and issuing concluding observations for state response. These concluding observations recognise the positive steps taken by states and highlight the principal areas of CESCR concern, providing recommendations for implementing human rights and detailing issues to be addressed in the next state report. Through analytic coding of the normative indicators of the right to health in both state reports and concluding observations, this article provides an empirical basis to understand the policy effects of the CESCR monitoring process on state implementation of the right to health.
NASA Astrophysics Data System (ADS)
Polito, V.; Testa, P.; De Pontieu, B.; Allred, J. C.
2017-12-01
The observation of the high-temperature (above 10 MK) Fe XXI 1354.1 Å line with the Interface Region Imaging Spectrograph (IRIS) has provided significant insights into the chromospheric evaporation process in flares. In particular, the line is often observed to be completely blueshifted, in contrast to previous observations at lower spatial and spectral resolution, and in agreement with predictions from theoretical models. Interestingly, the line is also observed to be mostly symmetric and with a large excess above the thermal width. One popular interpretation for the excess broadening is given by assuming a superposition of flows from different loop strands. In this work, we perform a statistical analysis of Fe XXI line profiles observed by IRIS during the impulsive phase of flares and compare our results with hydrodynamic simulations of multi-thread flare loops performed with the 1D RADYN code. Our results indicate that the multi-thread models cannot easily reproduce the symmetry of the line and that some other physical process might need to be invoked in order to explain the observed profiles.
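The superposition-of-flows interpretation can be illustrated with a toy calculation: summing Gaussian components with a spread of Doppler shifts from many strands yields a broadened composite profile whose centroid and effective width can be compared with a single-strand line. The Python sketch below uses arbitrary shift and width values, not RADYN output, and a nominal rest wavelength of 1354.1 Å.

```python
import numpy as np

c = 3.0e5                      # speed of light [km/s]
lam0 = 1354.1                  # nominal Fe XXI rest wavelength [Angstrom]
lam = np.linspace(lam0 - 1.5, lam0 + 1.5, 600)

def gaussian(lam, shift_kms, width_kms, amp=1.0):
    center = lam0 * (1.0 + shift_kms / c)
    sigma = lam0 * width_kms / c
    return amp * np.exp(-0.5 * ((lam - center) / sigma) ** 2)

# Superpose strands with a spread of blueshifts (toy values, not RADYN output).
rng = np.random.default_rng(3)
shifts = rng.normal(loc=-80.0, scale=40.0, size=30)     # km/s, negative = blueshift
profile = sum(gaussian(lam, s, 25.0) for s in shifts)

# Effective centroid and width of the composite profile.
centroid = np.sum(lam * profile) / np.sum(profile)
sigma_eff = np.sqrt(np.sum(profile * (lam - centroid) ** 2) / np.sum(profile))
print("centroid shift [km/s]:", (centroid / lam0 - 1) * c)
print("effective sigma [km/s]:", sigma_eff / lam0 * c)
```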
On the Chemical Mixing Induced by Internal Gravity Waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, T. M.; McElwaine, J. N.
Detailed modeling of stellar evolution requires a better understanding of the (magneto)hydrodynamic processes that mix chemical elements and transport angular momentum. Understanding these processes is crucial if we are to accurately interpret observations of chemical abundance anomalies, surface rotation measurements, and asteroseismic data. Here, we use two-dimensional hydrodynamic simulations of the generation and propagation of internal gravity waves in an intermediate-mass star to measure the chemical mixing induced by these waves. We show that such mixing can generally be treated as a diffusive process. We then show that the local diffusion coefficient does not depend on the local fluid velocity, but rather on the wave amplitude. We then use these findings to provide a simple parameterization for this diffusion, which can be incorporated into stellar evolution codes and tested against observations.
Symbol processing in the left angular gyrus: evidence from passive perception of digits.
Price, Gavin R; Ansari, Daniel
2011-08-01
Arabic digits are one of the most ubiquitous symbol sets in the world. While there have been many investigations into the neural processing of the semantic information digits represent (e.g. through numerical comparison tasks), little is known about the neural mechanisms which support the processing of digits as visual symbols. To characterise the component neurocognitive mechanisms which underlie numerical cognition, it is essential to understand the processing of digits as a visual category, independent of numerical magnitude processing. The 'Triple Code Model' (Dehaene, 1992; Dehaene and Cohen, 1995) posits an asemantic visual code for processing Arabic digits in the ventral visual stream, yet there is currently little empirical evidence in support of this code. This outstanding question was addressed in the current functional magnetic resonance imaging (fMRI) study by contrasting brain responses during the passive viewing of digits versus letters and novel symbols at short (50 ms) and long (500 ms) presentation times. The results of this study reveal increased activation for familiar symbols (digits and letters) relative to unfamiliar symbols (scrambled digits and letters) at long presentation durations in the left dorsal angular gyrus (dAG). Furthermore, increased activation for Arabic digits was observed in the left ventral angular gyrus (vAG) in comparison to letters, scrambled digits and scrambled letters at long presentation durations, but no digit-specific activation in any region at short presentation durations. These results suggest an absence of a digit-specific 'Visual Number Form Area' (VNFA) in the ventral visual cortex, and provide evidence for the role of the left ventral AG during the processing of digits in the absence of any explicit processing demands. We conclude that Arabic digit processing depends specifically on the left AG rather than on a ventral visual stream VNFA. Copyright © 2011 Elsevier Inc. All rights reserved.
Development of Components of Reading Skill.
ERIC Educational Resources Information Center
Curtis, Mary E.
1980-01-01
Verbal coding and listening comprehension ability differed among skilled and less skilled readers in second, third, and fifth grades. As verbal coding speed increased, comprehension skill became the more important predictor of reading skill. Apparently, verbal coding processes, which are slow, inhibit other reading processes. (Author/CP)
Moore, Brian C J
2003-03-01
To review how the properties of sounds are "coded" in the normal auditory system and to discuss the extent to which cochlear implants can and do represent these codes. Data are taken from published studies of the response of the cochlea and auditory nerve to simple and complex stimuli, in both the normal and the electrically stimulated ear. REVIEW CONTENT: The review describes: 1) the coding in the normal auditory system of overall level (which partly determines perceived loudness), spectral shape (which partly determines perceived timbre and the identity of speech sounds), periodicity (which partly determines pitch), and sound location; 2) the role of the active mechanism in the cochlea, and particularly the fast-acting compression associated with that mechanism; 3) the neural response patterns evoked by cochlear implants; and 4) how the response patterns evoked by implants differ from those observed in the normal auditory system in response to sound. A series of specific issues is then discussed, including: 1) how to compensate for the loss of cochlear compression; 2) the effective number of independent channels in a normal ear and in cochlear implantees; 3) the importance of independence of responses across neurons; 4) the stochastic nature of normal neural responses; 5) the possible role of across-channel coincidence detection; and 6) potential benefits of binaural implantation. Current cochlear implants do not adequately reproduce several aspects of the neural coding of sound in the normal auditory system. Improved electrode arrays and coding systems may lead to improved coding and, it is hoped, to better performance.
SU-E-T-493: Accelerated Monte Carlo Methods for Photon Dosimetry Using a Dual-GPU System and CUDA.
Liu, T; Ding, A; Xu, X
2012-06-01
To develop a Graphics Processing Unit (GPU) based Monte Carlo (MC) code that accelerates dose calculations on a dual-GPU system. We simulated a clinical case of prostate cancer treatment. A voxelized abdomen phantom derived from 120 CT slices was used, containing 218×126×60 voxels, and a GE LightSpeed 16-MDCT scanner was modeled. A CPU version of the MC code was first developed in C++ and tested on an Intel Xeon X5660 2.8 GHz CPU; it was then translated into a GPU version using CUDA C 4.1 and run on a dual Tesla M2090 GPU system. The code featured automatic assignment of simulation tasks to multiple GPUs, as well as accurate calculation of energy- and material-dependent cross-sections. Double-precision floating point format was used for accuracy. Doses to the rectum, prostate, bladder and femoral heads were calculated. When running on a single GPU, the GPU MC code was found to be 19 times faster than the CPU code and 42 times faster than MCNPX. These speedup factors were doubled on the dual-GPU system. The dose results were benchmarked against MCNPX, and a maximum difference of 1% was observed when the relative error was kept below 0.1%. A GPU-based MC code was developed for dose calculations using detailed patient and CT scanner models. Efficiency and accuracy were both guaranteed in this code. Scalability of the code was confirmed on the dual-GPU system. © 2012 American Association of Physicists in Medicine.
ELF Sferics Produced by Rocket-Triggered Lightning and Observed at Great Distances
NASA Astrophysics Data System (ADS)
Dupree, N. A.; Moore, R. C.; Fraser-Smith, A. C.
2013-12-01
Experimental observations of ELF radio atmospherics produced by rocket-triggered lightning flashes are used to analyze Earth-ionosphere waveguide excitation and propagation characteristics as a function of return stroke. Rocket-triggered lightning experiments are performed at the International Center for Lightning Research and Testing (ICLRT) located at Camp Blanding, Florida. Long-distance ELF observations are performed in California, Greenland, and Antarctica, although this work focuses on observations performed in Greenland. The lightning current waveforms directly measured at the base of the lightning channel (at the ICLRT) are used together with the Long Wavelength Propagation Capability (LWPC) code to predict the sferic waveform observed at the receiver locations under various ionospheric conditions. LWPC was developed by the Naval Ocean Systems Center over a period of many years. It is an inherently narrowband propagation code that has been modified to predict the broadband response of the Earth-ionosphere waveguide to an impulsive lightning flash while preserving the ability of LWPC to account for an inhomogeneous waveguide. This paper critically compares observations with model predictions, and in particular analyzes Earth-ionosphere waveguide excitation as a function of return stroke. The ability to infer source characteristics using observations at great distances may prove to greatly enhance the understanding of lightning processes that are associated with the production of transient luminous events (TLEs) as well as other ionospheric effects associated with lightning.
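The narrowband-to-broadband strategy described above can be sketched as follows: take the spectrum of the measured channel-base current, multiply it frequency by frequency by a waveguide transfer function (which LWPC would supply for the given path and ionosphere), and inverse-transform to obtain the predicted sferic. The snippet below only illustrates that pipeline; the biexponential current, the low-pass transfer function, and the path length are placeholder assumptions, not LWPC output.

```python
import numpy as np

fs = 100_000.0                                    # sample rate (Hz), assumed
t = np.arange(0.0, 0.05, 1.0 / fs)                # 50 ms record
i_src = np.exp(-t / 5e-3) - np.exp(-t / 5e-4)     # toy return-stroke current

spectrum = np.fft.rfft(i_src)
freqs = np.fft.rfftfreq(i_src.size, 1.0 / fs)

# Placeholder Earth-ionosphere waveguide response: mild low-pass attenuation
# plus a propagation delay; in practice each H(f) would come from an LWPC run.
distance = 5.0e6                                   # m, ICLRT-to-Greenland scale
H = np.exp(-freqs / 20_000.0) * np.exp(-2j * np.pi * freqs * distance / 3.0e8)

sferic = np.fft.irfft(spectrum * H, n=i_src.size)  # predicted broadband waveform
```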
ERIC Educational Resources Information Center
Cookston, Jeffrey T.; Harrist, Amanda W.; Ainslie, Ricardo C.
2003-01-01
Indices of marital discord and mother-child affective processes were used to predict levels of negativity children displayed with unfamiliar peers. Thirty-nine mothers and their 5-year-olds were observed with 5-7 other mother-child dyads during a 30-minute free play session. Mother and child negativity were coded and two types of marital discord…
Microlensing observations rapid search for exoplanets: MORSE code for GPUs
NASA Astrophysics Data System (ADS)
McDougall, Alistair; Albrow, Michael D.
2016-02-01
The rapid analysis of ongoing gravitational microlensing events has been integral to the successful detection and characterization of cool planets orbiting low-mass stars in the Galaxy. In this paper, we present an implementation of search and fit techniques on graphical processing unit (GPU) hardware. The method allows for the rapid identification of candidate planetary microlensing events and their subsequent follow-up for detailed characterization.
NASA Astrophysics Data System (ADS)
Rau, G.; Hron, J.; Paladini, C.; Eriksson, K.; Aringer, B.; Groenewegen, M. A. T.; Mečina, M.
2015-08-01
We present an attempt to model the atmosphere of the carbon-rich Mira star RU Vir, using different techniques including spectroscopy, photometry, and interferometry. A radiative transfer code and hydrostatic model atmospheres were used for a preliminary study. To investigate the dynamic processes happening in RU Vir, dynamic model atmospheres were compared to new MIDI/VLTI observations obtained in April 2014, and SiC opacities were added.
Zhang, Haiyun; Sun, Dejun; Li, Defu; Zheng, Zeguang; Xu, Jingyi; Liang, Xue; Zhang, Chenting; Wang, Sheng; Wang, Jian; Lu, Wenju
2018-05-15
Long non-coding RNAs (lncRNAs) have critical regulatory roles in protein-coding gene expression. Aberrant expression profiles of lncRNAs have been observed in various human diseases. In this study, we investigated transcriptome profiles in lung tissues of a chronic cigarette smoke (CS)-induced COPD mouse model. We found that 109 lncRNAs and 260 mRNAs were significantly differentially expressed in lungs of the chronic CS-induced COPD mouse model compared with control animals. GO and KEGG analyses indicated that the protein-coding genes associated with differentially expressed lncRNAs were mainly involved in the protein processing in endoplasmic reticulum pathway and the taurine and hypotaurine metabolism pathway. Combining high-throughput data analysis with qRT-PCR validation in lungs of the chronic CS-induced COPD mouse model, in 16HBE cells treated with CSE, and in PBMCs from patients with COPD revealed that NR_102714 and its associated protein-coding gene UCHL1 might be involved in the development of COPD in both mouse and human. In conclusion, our study demonstrated that aberrant expression profiles of lncRNAs and mRNAs exist in lungs of the chronic CS-induced COPD mouse model. From an animal-model perspective, these results might provide further clues for investigating the biological functions of lncRNAs and their potential target protein-coding genes in the pathogenesis of COPD.
Anguera, M Teresa; Portell, Mariona; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana
2018-01-01
Indirect observation is a recent concept in systematic observation. It largely involves analyzing textual material generated either indirectly from transcriptions of audio recordings of verbal behavior in natural settings (e.g., conversation, group discussions) or directly from narratives (e.g., letters of complaint, tweets, forum posts). It may also feature seemingly unobtrusive objects that can provide relevant insights into daily routines. All these materials constitute an extremely rich source of information for studying everyday life, and they are continuously growing with the burgeoning of new technologies for data recording, dissemination, and storage. Narratives are an excellent vehicle for studying everyday life, and quantitization is proposed as a means of integrating qualitative and quantitative elements. However, this analysis requires a structured system that enables researchers to analyze varying forms and sources of information objectively. In this paper, we present a methodological framework detailing the steps and decisions required to quantitatively analyze a set of data that was originally qualitative. We provide guidelines on study dimensions, text segmentation criteria, ad hoc observation instruments, data quality controls, and coding and preparation of text for quantitative analysis. The quality control stage is essential to ensure that the code matrices generated from the qualitative data are reliable. We provide examples of how an indirect observation study can produce data for quantitative analysis and also describe the different software tools available for the various stages of the process. The proposed method is framed within a specific mixed methods approach that involves collecting qualitative data and subsequently transforming these into matrices of codes (not frequencies) for quantitative analysis to detect underlying structures and behavioral patterns. The data collection and quality control procedures fully meet the requirement of flexibility and provide new perspectives on data integration in the study of biopsychosocial aspects in everyday contexts.
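As a rough illustration of the quantitization step described above (and not the authors' software), the sketch below builds a matrix of codes, one row per segmented text unit and one column per observation-instrument dimension, which is the form that pattern-detection tools then take as input. The unit numbers, dimensions, and codes are invented for the example.

```python
import pandas as pd

# Each record is one code assigned to one segmented unit on one dimension.
segments = [
    {"unit": 1, "dimension": "affect", "code": "POS"},
    {"unit": 1, "dimension": "topic",  "code": "FOOD"},
    {"unit": 2, "dimension": "affect", "code": "NEG"},
    {"unit": 2, "dimension": "topic",  "code": "WORK"},
]

# Matrix of codes (not frequencies): rows are units, columns are dimensions.
code_matrix = pd.DataFrame(segments).pivot(index="unit", columns="dimension", values="code")
print(code_matrix)
```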
Normalized value coding explains dynamic adaptation in the human valuation process.
Khaw, Mel W; Glimcher, Paul W; Louie, Kenway
2017-11-28
The notion of subjective value is central to choice theories in ecology, economics, and psychology, serving as an integrated decision variable by which options are compared. Subjective value is often assumed to be an absolute quantity, determined in a static manner by the properties of an individual option. Recent neurobiological studies, however, have shown that neural value coding dynamically adapts to the statistics of the recent reward environment, introducing an intrinsic temporal context dependence into the neural representation of value. Whether valuation exhibits this kind of dynamic adaptation at the behavioral level is unknown. Here, we show that the valuation process in human subjects adapts to the history of previous values, with current valuations varying inversely with the average value of recently observed items. The dynamics of this adaptive valuation are captured by divisive normalization, linking these temporal context effects to spatial context effects in decision making as well as spatial and temporal context effects in perception. These findings suggest that adaptation is a universal feature of neural information processing and offer a unifying explanation for contextual phenomena in fields ranging from visual psychophysics to economic choice.
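A minimal sketch of the adaptive valuation just described, assuming a simple divisive-normalization form in which each item's value is scaled down by the running average of recently observed values (the window length and semisaturation constant are arbitrary choices, not the authors' fitted parameters):

```python
import numpy as np

def adapted_value(values, sigma=1.0, window=10):
    """Divisive normalization over time: each value is divided by a constant
    plus the mean of the previous `window` values, so valuations vary
    inversely with the recent-value context."""
    values = np.asarray(values, dtype=float)
    out = np.empty_like(values)
    for t, v in enumerate(values):
        recent = values[max(0, t - window):t]
        context = recent.mean() if recent.size else 0.0
        out[t] = v / (sigma + context)
    return out

print(adapted_value([5, 5, 5, 1, 1, 5]))  # the same item is valued less after a rich context
```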
Physical Processes and Applications of the Monte Carlo Radiative Energy Deposition (MRED) Code
NASA Astrophysics Data System (ADS)
Reed, Robert A.; Weller, Robert A.; Mendenhall, Marcus H.; Fleetwood, Daniel M.; Warren, Kevin M.; Sierawski, Brian D.; King, Michael P.; Schrimpf, Ronald D.; Auden, Elizabeth C.
2015-08-01
MRED is a Python-language scriptable computer application that simulates radiation transport. It is the computational engine for the on-line tool CRÈME-MC. MRED is based on C++ code from Geant4 with additional Fortran components to simulate electron transport and nuclear reactions with high precision. We provide a detailed description of the structure of MRED and the implementation of the simulation of physical processes used to simulate radiation effects in electronic devices and circuits. Extensive discussion and references are provided that illustrate the validation of models used to implement specific simulations of relevant physical processes. Several applications of MRED are summarized that demonstrate its ability to predict and describe basic physical phenomena associated with irradiation of electronic circuits and devices. These include effects from single particle radiation (including both direct ionization and indirect ionization effects), dose enhancement effects, and displacement damage effects. MRED simulations have also helped to identify new single event upset mechanisms not previously observed by experiment, but since confirmed, including upsets due to muons and energetic electrons.
ERIC Educational Resources Information Center
Hickok, Gregory
2012-01-01
Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…
System theory in industrial patient monitoring: an overview.
Baura, G D
2004-01-01
Patient monitoring refers to the continuous observation of repeating events of physiologic function to guide therapy or to monitor the effectiveness of interventions, and is used primarily in the intensive care unit and operating room. Commonly processed signals are the electrocardiogram, intraarterial blood pressure, arterial saturation of oxygen, and cardiac output. To this day, the majority of physiologic waveform processing in patient monitors is conducted using heuristic curve fitting. However, in the early 1990s, a few enterprising engineers and physicians began using system theory to improve their core processing. Applications included improvement of signal-to-noise ratio, either due to low signal levels or motion artifact, and improvement in feature detection. The goal of this mini-symposium is to review the early work in this emerging field, which has led to technologic breakthroughs. In this overview talk, the process of system theory algorithm research and development is discussed. Research for industrial monitors involves substantial data collection, with some data used for algorithm training and the remainder used for validation. Once the algorithms are validated, they are translated into detailed specifications. Development then translates these specifications into DSP code. The DSP code is verified and validated per the Good Manufacturing Practices mandated by the FDA.
Multi-GNSS precise point positioning (MGPPP) using raw observations
NASA Astrophysics Data System (ADS)
Liu, Teng; Yuan, Yunbin; Zhang, Baocheng; Wang, Ningbo; Tan, Bingfeng; Chen, Yongchang
2017-03-01
A joint-processing model for multi-GNSS (GPS, GLONASS, BDS and GALILEO) precise point positioning (PPP) is proposed, in which raw code and phase observations are used. In the proposed model, inter-system biases (ISBs) and GLONASS code inter-frequency biases (IFBs) are carefully considered, among which GLONASS code IFBs are modeled as a linear function of frequency numbers. To obtain a full-rank function model, the unknowns are re-parameterized, and the estimable slant ionospheric delays and ISBs/IFBs are derived and estimated simultaneously. One month of data from April 2015, collected at 32 stations of the International GNSS Service (IGS) Multi-GNSS Experiment (MGEX) tracking network, has been used to validate the proposed model. Preliminary results show that RMS values of the positioning errors (with respect to external double-difference solutions) for static/kinematic solutions (four systems) are 6.2 mm/2.1 cm (north), 6.0 mm/2.2 cm (east) and 9.3 mm/4.9 cm (up). One-day stabilities of the estimated ISBs, described by STD values, are 0.36 and 0.38 ns for GLONASS and BDS, respectively. Significant ISB jumps are identified between adjacent days for all stations, which are caused by the different satellite clock datums used on different days and for different systems. Unlike ISBs, the estimated GLONASS code IFBs are quite stable for all stations, with an average STD of 0.04 ns over a month. A single-difference experiment on a short baseline shows that PPP ionospheric delays are more precise than traditional leveling ionospheric delays.
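The GLONASS code IFB model described above reduces to one intercept and one slope per receiver (and frequency band) instead of a separate bias per satellite. A minimal sketch, with placeholder parameter values (the actual values are estimated in the PPP adjustment):

```python
import numpy as np

def glonass_code_ifb(freq_number, intercept_ns=0.0, slope_ns=0.0):
    """GLONASS code inter-frequency bias modeled as a linear function of the
    satellite frequency number k (FDMA channels, k = -7 .. +6). Only two
    parameters per receiver are needed instead of one bias per satellite."""
    return intercept_ns + slope_ns * np.asarray(freq_number, dtype=float)

k = np.arange(-7, 7)                                         # GLONASS frequency numbers
print(glonass_code_ifb(k, intercept_ns=0.5, slope_ns=0.04))  # biases in ns (placeholders)
```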
Color-coded Live Imaging of Heterokaryon Formation and Nuclear Fusion of Hybridizing Cancer Cells.
Suetsugu, Atsushi; Matsumoto, Takuro; Hasegawa, Kosuke; Nakamura, Miki; Kunisada, Takahiro; Shimizu, Masahito; Saji, Shigetoyo; Moriwaki, Hisataka; Bouvet, Michael; Hoffman, Robert M
2016-08-01
Fusion of cancer cells has been studied for over half a century. However, the steps involved after initial fusion between cells, such as heterokaryon formation and nuclear fusion, have been difficult to observe in real time. In order to be able to visualize these steps, we have established cancer-cell sublines from the human HT-1080 fibrosarcoma, one expressing green fluorescent protein (GFP) linked to histone H2B in the nucleus and a red fluorescent protein (RFP) in the cytoplasm, and the other subline expressing RFP (mCherry) linked to histone H2B in the nucleus and GFP in the cytoplasm. The two reciprocal color-coded sublines of HT-1080 cells were fused using the Sendai virus. The fused cells were cultured on plastic and observed using an Olympus FV1000 confocal microscope. Multi-nucleate (heterokaryotic) cancer cells, in addition to hybrid cancer cells with single- or multiple-fused nuclei, including fused mitotic nuclei, were observed among the fused cells. Heterokaryons with red, green, orange and yellow nuclei were observed by confocal imaging, even in single hybrid cells. The orange and yellow nuclei indicate nuclear fusion. Red and green nuclei remained unfused. Cell fusion with heterokaryon formation and subsequent nuclear fusion resulting in hybridization may be an important natural phenomenon between cancer cells that may make them more malignant. The ability to image the complex processes following cell fusion using reciprocal color-coded cancer cells will allow greater understanding of the genetic basis of malignancy. Copyright © 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
Coding coarse grained polymer model for LAMMPS and its application to polymer crystallization
NASA Astrophysics Data System (ADS)
Luo, Chuanfu; Sommer, Jens-Uwe
2009-08-01
We present a patch code for LAMMPS to implement a coarse-grained (CG) model of poly(vinyl alcohol) (PVA). LAMMPS is a powerful molecular dynamics (MD) simulator developed at Sandia National Laboratories. Our patch code implements a tabulated angular potential and a Lennard-Jones 9-6 (LJ96) style interaction for PVA. Benefiting from the excellent parallel efficiency of LAMMPS, our patch code is suitable for large-scale simulations. This CG-PVA code is used to study polymer crystallization, a long-standing unsolved problem in polymer physics. By using parallel computing, cooling and heating processes for long chains are simulated. The results show that chain-folded structures resembling the lamellae of polymer crystals are formed during the cooling process. The evolution of the static structure factor during the crystallization transition indicates that long-range density order appears before local crystalline packing, which is consistent with some experimental observations by small/wide-angle X-ray scattering (SAXS/WAXS). During the heating process, the crystalline regions continue to grow until they are fully melted, as confirmed by the evolution of both the static structure factor and the average stem length formed by the chains. This two-stage behavior indicates that melting of polymer crystals is far from thermodynamic equilibrium. Our results concur with various experiments. It is the first time that such growth/reorganization behavior has been clearly observed in MD simulations. Our code can easily be used to model other types of polymers by providing a file containing the tabulated angle potential data and a set of appropriate parameters.
Program summary
Program title: lammps-cgpva
Catalogue identifier: AEDE_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDE_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU GPL
No. of lines in distributed program, including test data, etc.: 940 798
No. of bytes in distributed program, including test data, etc.: 12 536 245
Distribution format: tar.gz
Programming language: C++/MPI
Computer: Tested on Intel-x86 and AMD64 architectures; should run on any architecture providing a C++ compiler
Operating system: Tested under Linux; any other OS with a C++ compiler and MPI library should suffice
Has the code been vectorized or parallelized?: Yes
RAM: Depends on system size and how many CPUs are used
Classification: 7.7
External routines: LAMMPS (http://lammps.sandia.gov/), FFTW (http://www.fftw.org/)
Nature of problem: Implementing special tabulated angle potentials and Lennard-Jones 9-6 style interactions of a coarse-grained polymer model for the LAMMPS code.
Solution method: Cubic spline interpolation of the input tabulated angle potential data.
Restrictions: The code is based on a former version of LAMMPS.
Unusual features: Any special angular potential can be used if it can be tabulated.
Running time: Seconds to weeks, depending on system size, CPU speed and how many CPUs are used. The test run provided with the package takes about 5 minutes on 4 AMD Opteron (2.6 GHz) CPUs.
References:
D. Reith, H. Meyer, F. Müller-Plathe, Macromolecules 34 (2001) 2335-2345.
H. Meyer, F. Müller-Plathe, J. Chem. Phys. 115 (2001) 7807.
H. Meyer, F. Müller-Plathe, Macromolecules 35 (2002) 1241-1252.
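The stated solution method, cubic-spline interpolation of a tabulated angle potential, can be sketched outside LAMMPS as follows. The table values below are invented (the real CG-PVA table ships with the patch code), and in the patch itself the evaluation is done in C++ inside the angle-style class.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Invented tabulated angular potential E(theta); the real table comes with the code.
theta_deg = np.array([60.0, 90.0, 120.0, 150.0, 180.0])
energy = np.array([5.0, 2.0, 0.5, 0.1, 0.0])           # placeholder units

spline = CubicSpline(theta_deg, energy)                 # smooth E(theta)
dE_dtheta = spline.derivative()                         # needed for the angular force

theta = 137.0
print("E =", float(spline(theta)), "  -dE/dtheta =", float(-dE_dtheta(theta)))
```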
Testing Photoionization Calculations Using Chandra X-ray Spectra
NASA Technical Reports Server (NTRS)
Kallman, Tim
2008-01-01
A great deal of work has been devoted to the accumulation of accurate quantities describing atomic processes for use in analysis of astrophysical spectra. But in many situations of interest the interpretation of a quantity which is observed, such as a line flux, depends on the results of a modeling or spectrum-synthesis code. The results of such a code depend in turn on many atomic rates or cross sections, and the sensitivity of the observable quantity to the various rates and cross sections may be non-linear and, if so, cannot easily be derived analytically. In such cases the most practical approach to understanding the sensitivity of observables to atomic cross sections is to perform numerical experiments, by calculating models with various rates perturbed by random (but known) factors. In addition, it is useful to compare the results of such experiments with some sample observations, in order to focus attention on the rates which are of the greatest relevance to real observations. In this paper I will present some attempts to carry out this program, focussing on two sample datasets taken with the Chandra HETG. I will discuss the sensitivity of synthetic spectra to atomic data affecting ionization balance, temperature, and line opacity or emissivity, and discuss the implications for the ultimate goal of inferring astrophysical parameters.
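The numerical experiment described, perturbing each rate by a random but known factor and watching the observable respond, can be sketched with a toy stand-in for the synthesis code; the three-parameter flux function below is invented purely to show how sensitivity exponents are recovered by a log-log regression.

```python
import numpy as np

rng = np.random.default_rng(0)

def line_flux(rates):
    """Toy stand-in for a spectrum-synthesis code: a nonlinear map from a
    vector of atomic rates to one observable line flux (illustrative only)."""
    return rates[0] * rates[1] / (1.0 + 3.0 * rates[2])

base = np.array([1.0, 2.0, 0.5])
factors = rng.lognormal(mean=0.0, sigma=0.1, size=(500, 3))   # known perturbations
fluxes = np.array([line_flux(base * f) for f in factors])

# Local sensitivity exponents: log(flux/flux0) ~ sum_i s_i * log(factor_i)
sens, *_ = np.linalg.lstsq(np.log(factors), np.log(fluxes / line_flux(base)), rcond=None)
print(sens)   # roughly [1, 1, -0.6] for this toy model
```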
Sensitivity Analysis Applied to Atomic Data Used for X-ray Spectrum Synthesis
NASA Technical Reports Server (NTRS)
Kallman, Tim
2006-01-01
A great deal of work has been devoted to the accumulation of accurate quantities describing atomic processes for use in analysis of astrophysical spectra. But in many situations of interest the interpretation of a quantity which is observed, such as a line flux, depends on the results of a modeling or spectrum-synthesis code. The results of such a code depend in turn on many atomic rates or cross sections, and the sensitivity of the observable quantity to the various rates and cross sections may be non-linear and, if so, cannot easily be derived analytically. In such cases the most practical approach to understanding the sensitivity of observables to atomic cross sections is to perform numerical experiments, by calculating models with various rates perturbed by random (but known) factors. In addition, it is useful to compare the results of such experiments with some sample observations, in order to focus attention on the rates which are of the greatest relevance to real observations. In this paper I will present some attempts to carry out this program, focussing on two sample datasets taken with the Chandra HETG. I will discuss the sensitivity of synthetic spectra to atomic data affecting ionization balance, temperature, and line opacity or emissivity, and discuss the implications for the ultimate goal of inferring astrophysical parameters.
Advanced LIGO constraints on neutron star mergers and r-process sites
Côté, Benoit; Belczynski, Krzysztof; Fryer, Chris L.; ...
2017-02-20
The role of compact binary mergers as the main production site of r-process elements is investigated by combining stellar abundances of Eu observed in the Milky Way, galactic chemical evolution (GCE) simulations, binary population synthesis models, and gravitational wave measurements from Advanced LIGO. We compiled and reviewed seven recent GCE studies to extract the frequency of neutron star–neutron star (NS–NS) mergers that is needed in order to reproduce the observed [Eu/Fe] versus [Fe/H] relationship. We used our simple chemical evolution code to explore the impact of different analytical delay-time distribution functions for NS–NS mergers. We then combined our metallicity-dependent population synthesis models with our chemical evolution code to bring their predictions, for both NS–NS mergers and black hole–neutron star mergers, into a GCE context. Finally, we convolved our results with the cosmic star formation history to provide a direct comparison with current and upcoming Advanced LIGO measurements. When assuming that NS–NS mergers are the exclusive r-process sites, and that the ejected r-process mass per merger event is 0.01 M⊙, the number of NS–NS mergers needed in GCE studies is about 10 times larger than what is predicted by standard population synthesis models. Here, these two distinct fields can only be consistent with each other when assuming optimistic rates, massive NS–NS merger ejecta, and low Fe yields for massive stars. For now, population synthesis models and GCE simulations are in agreement with the current upper limit (O1) established by Advanced LIGO during their first run of observations. Upcoming measurements will provide an important constraint on the actual local NS–NS merger rate, will provide valuable insights on the plausibility of the GCE requirement, and will help to define whether or not compact binary mergers can be the dominant source of r-process elements in the universe.
Cloudy - simulating the non-equilibrium microphysics of gas and dust, and its observed spectrum
NASA Astrophysics Data System (ADS)
Ferland, Gary J.
2014-01-01
Cloudy is an open-source plasma/spectral simulation code, last described in the open-access journal Revista Mexicana (Ferland et al. 2013, 2013RMxAA..49..137F). The project goal is a complete simulation of the microphysics of gas and dust over the full range of density, temperature, and ionization that we encounter in astrophysics, together with a prediction of the observed spectrum. Cloudy is one of the more widely used theory codes in astrophysics with roughly 200 papers citing its documentation each year. It is developed by graduate students, postdocs, and an international network of collaborators. Cloudy is freely available on the web at trac.nublado.org, the user community can post questions on http://groups.yahoo.com/neo/groups/cloudy_simulations/info, and summer schools are organized to learn more about Cloudy and its use (http://cloud9.pa.uky.edu/~gary/cloudy/CloudySummerSchool/). The code’s widespread use is possible because of extensive automatic testing. It is exercised over its full range of applicability whenever the source is changed. Changes in predicted quantities are automatically detected along with any newly introduced problems. The code is designed to be autonomous and self-aware. It generates a report at the end of a calculation that summarizes any problems encountered along with suggestions of potentially incorrect boundary conditions. This self-monitoring is a core feature since the code is now often used to generate large MPI grids of simulations, making it impossible for a user to verify each calculation by hand. I will describe some challenges in developing a large physics code, with its many interconnected physical processes, many at the frontier of research in atomic or molecular physics, all in an open environment.
NASA Astrophysics Data System (ADS)
Reese, Keturah
Under the direction of Sharon Murphy Augustine, Ph.D. (Curriculum and Instruction). There was a substantial performance gap between African American students and other ethnic groups. Additionally, African American students in a Title I school were at significantly high risk of not meeting or exceeding standards on performance tests in science. Past reports have shown average gains in some subject areas and declines in others (NCES, 2011; GADOE, 2012). Current instructional strategies and the lack of literacy within the biology classroom created a problem for African American high school students on national and state assessments. The purpose of this study was to examine the perceptions of African American students and teachers in the context of literacy and biology through the incorporation of an interactive notebook and other literacy strategies. The data were collected in three ways: field notes from a two-week observation period within the biology classroom, student and teacher interviews, and student work samples. During the observations, student work collection, and interviews, I looked for the following codes: active learning, constructive learning, collaborative learning, authentic learning, and intentional learning. In the process of coding for the pre-determined codes, three more codes emerged: organization, studying/student ownership, and student-teacher relationships. Students and teachers both solidified the notion that literacy and biology worked well together. The implemented literacy strategies were something that both teachers and students appreciated in their learning of biology. Overall, students and teachers perceived that the interactive notebook, along with Cornell notes, Thinking Maps, close reads, writing, lab experiments, and group work, created meaningful learning experiences within the biology classroom.
Eye coding mechanisms in early human face event-related potentials.
Rousselet, Guillaume A; Ince, Robin A A; van Rijsbergen, Nicola J; Schyns, Philippe G
2014-11-10
In humans, the N170 event-related potential (ERP) is an integrated measure of cortical activity that varies in amplitude and latency across trials. Researchers often conjecture that N170 variations reflect cortical mechanisms of stimulus coding for recognition. Here, to settle the conjecture and understand cortical information processing mechanisms, we unraveled the coding function of N170 latency and amplitude variations in possibly the simplest socially important natural visual task: face detection. On each experimental trial, 16 observers saw face and noise pictures sparsely sampled with small Gaussian apertures. Reverse-correlation methods coupled with information theory revealed that the presence of the eye specifically covaries with behavioral and neural measurements: the left eye strongly modulates reaction times and lateral electrodes represent mainly the presence of the contralateral eye during the rising part of the N170, with maximum sensitivity before the N170 peak. Furthermore, single-trial N170 latencies code more about the presence of the contralateral eye than N170 amplitudes and early latencies are associated with faster reaction times. The absence of these effects in control images that did not contain a face refutes alternative accounts based on retinal biases or allocation of attention to the eye location on the face. We conclude that the rising part of the N170, roughly 120-170 ms post-stimulus, is a critical time-window in human face processing mechanisms, reflecting predominantly, in a face detection task, the encoding of a single feature: the contralateral eye. © 2014 ARVO.
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Fales, Carl L.
1990-01-01
Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.
Surveying multidisciplinary aspects in real-time distributed coding for Wireless Sensor Networks.
Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio
2015-01-27
Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, "real-time" coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the inter-play of the different theories.
Software quality and process improvement in scientific simulation codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ambrosiano, J.; Webster, R.
1997-11-01
This report contains viewgraphs on the quest to develop better simulation code quality through process modeling and improvement. This study is based on the experience of the authors and interviews with ten subjects chosen from simulation code development teams at LANL. This study is descriptive rather than scientific.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-26
... for Residential Construction in High Wind Regions. ICC 700: National Green Building Standard The..., coordinated, and necessary to regulate the built environment. Federal agencies frequently use these codes and... International Codes and Standards consist of the following: ICC Codes International Building Code. International...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-16
... for Residential Construction in High Wind Areas. ICC 700: National Green Building Standard. The... Codes and Standards that are comprehensive, coordinated, and necessary to regulate the built environment... International Codes and Standards consist of the following: ICC Codes International Building Code. International...
[Permanent disability and the insurance estimation process].
Soumah, M M; Mbaye, I; Ndiaye, M; Bah, H; Gaye Fall, M C; Sow, M L
2006-01-01
Casualties are indemnified through two processes: first, by settlement based on a disability rate proposed by insurance physicians, and second, on a rate proposed by a court-appointed medical expert. The shortcomings of the previous indemnification scale justified adoption of the Inter-African Conference of Insurance Markets code. Six insurance companies and the Automotive Guarantee Fund were the debtors. Only 627 victims were indemnified between 1986 and 2003. Expert assessments performed at the forensic medicine service formed the basis of the investigation. The parameters examined were the insurance company, the type of settlement, the sequelae, and the damages retained. The data, collected on record cards, were analyzed with Epi Info software. The partial permanent disability rates set since the code's adoption differ from those set before it. The settlement process concerned 567 victims (90.4%); sixty victims were indemnified through the courts. By process type, 61.6% of the rates set through the judicial process were moderate permanent partial disabilities. After 1997, a decrease in high and moderate permanent partial disabilities was observed in both processes. The assessment of pretium doloris (pain and suffering) is more subjective but must still compensate the sequelae. Moderate pretium doloris awards were the majority in both processes before and after 1997, with a marked decrease in the settlement process (-15.07 points) and a small increase of 10.98 points. A common scale has reduced judicial litigation over casualties despite the scale's limits; since 1997 only patients with severe sequelae reach the judicial process.
Standardized Radiation Shield Design Methods: 2005 HZETRN
NASA Technical Reports Server (NTRS)
Wilson, John W.; Tripathi, Ram K.; Badavi, Francis F.; Cucinotta, Francis A.
2006-01-01
Research completed by the Langley Research Center through 1995, resulting in the HZETRN code, provides the current basis for shield design methods according to NASA STD-3000 (2005). With this new prominence, the database, basic numerical procedures, and algorithms are being re-examined, with new methods of verification and validation being implemented to capture a well-defined algorithm for engineering design processes to be used in this early development phase of the Bush initiative. This process provides the methodology to transform the 1995 HZETRN research code into the 2005 HZETRN engineering code to be available for these early design processes. In this paper, we will review the basic derivations, including new corrections to the codes to ensure improved numerical stability, and provide benchmarks for code verification.
Application of grammar-based codes for lossless compression of digital mammograms
NASA Astrophysics Data System (ADS)
Li, Xiaoli; Krishnan, Srithar; Ma, Ngok-Wah
2006-01-01
A newly developed grammar-based lossless source coding theory and its implementation were proposed in 1999 and 2000, respectively, by Yang and Kieffer. The code first transforms the original data sequence into an irreducible context-free grammar, which is then compressed using arithmetic coding. In the study of grammar-based coding for mammography applications, we encountered two issues: processing time and the limited number of single-character grammar variables (G variables). For the first issue, we discovered a feature that simplifies the matching-subsequence search in the irreducible grammar transform process. Using this discovery, an extended grammar code technique is proposed, and the processing time of the grammar code can be significantly reduced. For the second issue, we propose using double-character symbols to increase the number of grammar variables. Under the condition that all the G variables have the same probability of being used, our analysis shows that the double- and single-character approaches have the same compression rates. Using the proposed methods, we show that the grammar code can outperform three other schemes (Lempel-Ziv-Welch (LZW), arithmetic, and Huffman coding) in compression ratio, and has error-tolerance capabilities similar to LZW coding under similar circumstances.
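For readers unfamiliar with grammar-based codes, the toy sketch below shows the general idea of a grammar transform: repeatedly replacing the most frequent adjacent pair of symbols with a new variable (a Re-Pair-style heuristic, not the Yang-Kieffer irreducible grammar transform used in the paper). In a complete coder the final sequence and the rule set would then be entropy-coded, e.g. with arithmetic coding.

```python
from collections import Counter

def digram_grammar(data, min_count=2):
    """Greedy digram replacement: build grammar rules by replacing the most
    frequent adjacent symbol pair with a fresh variable until no pair repeats."""
    seq = list(data)
    rules, next_var = {}, 256            # variables start above the byte alphabet
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < min_count:
            break
        rules[next_var] = pair
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(next_var)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq, next_var = out, next_var + 1
    return seq, rules

compressed, grammar = digram_grammar(b"abababab")
print(compressed, grammar)   # [257, 257] with rules {256: (97, 98), 257: (256, 256)}
```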
Enrichment of Circular Code Motifs in the Genes of the Yeast Saccharomyces cerevisiae.
Michel, Christian J; Ngoune, Viviane Nguefack; Poch, Olivier; Ripp, Raymond; Thompson, Julie D
2017-12-03
A set X of 20 trinucleotides has been found to have the highest average occurrence in the reading frame, compared to the two shifted frames, of genes of bacteria, archaea, eukaryotes, plasmids and viruses. This set X has an interesting mathematical property, since X is a maximal C3 self-complementary trinucleotide circular code. Furthermore, any motif obtained from this circular code X has the capacity to retrieve, maintain and synchronize the original (reading) frame. Since 1996, the theory of circular codes in genes has mainly been developed by analysing the properties of the 20 trinucleotides of X, using combinatorics and statistical approaches. For the first time, we test this theory by analysing the X motifs, i.e., motifs from the circular code X, in the complete genome of the yeast Saccharomyces cerevisiae. Several properties of X motifs are identified by basic statistics (at the frequency level), and evaluated by comparison to R motifs, i.e., random motifs generated from 30 different random codes R. We first show that the frequency of X motifs is significantly greater than that of R motifs in the genome of S. cerevisiae. We then verify that no significant difference is observed between the frequencies of X and R motifs in the non-coding regions of S. cerevisiae, but that the occurrence number of X motifs is significantly higher than R motifs in the genes (protein-coding regions). This property is true for all cardinalities of X motifs (from 4 to 20) and for all 16 chromosomes. We further investigate the distribution of X motifs in the three frames of S. cerevisiae genes and show that they occur more frequently in the reading frame, regardless of their cardinality or their length. Finally, the ratio of X genes, i.e., genes with at least one X motif, to non-X genes, in the set of verified genes is significantly different to that observed in the set of putative or dubious genes with no experimental evidence. These results, taken together, represent the first evidence for a significant enrichment of X motifs in the genes of an extant organism. They raise two hypotheses: the X motifs may be evolutionary relics of the primitive codes used for translation, or they may continue to play a functional role in the complex processes of genome decoding and protein synthesis.
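A minimal sketch of the frame-wise counting that underlies these comparisons (an X motif proper is a run of consecutive trinucleotides from X; the snippet below only counts how many trinucleotides in each of the three frames belong to a given set, with a made-up two-element set standing in for the published 20-trinucleotide code X):

```python
def frame_counts(seq, X):
    """Count, for each of the three frames, how many trinucleotides belong to X."""
    counts = []
    for frame in range(3):
        codons = (seq[i:i + 3] for i in range(frame, len(seq) - 2, 3))
        counts.append(sum(c in X for c in codons))
    return counts

X = {"AAC", "GAG"}                         # stand-in; the real code X has 20 trinucleotides
print(frame_counts("AACGAGAACGTTGAG", X))  # occurrences in frames 0, 1 and 2
```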
NASA Astrophysics Data System (ADS)
Ness, P. H.; Jacobson, H.
1984-10-01
The thrust of 'group technology' is toward the exploitation of similarities in component design and manufacturing process plans to achieve assembly line flow cost efficiencies for small batch production. The systematic method devised for the identification of similarities in component geometry and processing steps is a coding and classification scheme implemented by interactive CAD/CAM systems. This coding and classification scheme has led to significant increases in computer processing power, allowing rapid searches and retrievals on the basis of a 30-digit code together with user-friendly computer graphics.
Design applications for supercomputers
NASA Technical Reports Server (NTRS)
Studerus, C. J.
1987-01-01
The complexity of codes for solving real aerodynamic problems has progressed from simple two-dimensional models to three-dimensional inviscid and viscous models. As the algorithms used in the codes increased in accuracy, speed, and robustness, the codes were steadily incorporated into standard design processes. The highly sophisticated codes, which provide solutions to truly complex flows, require computers with large memory and high computational speed. The advent of high-speed supercomputers, which makes solving these complex flows more practical, permits the introduction of the codes into the design system at an earlier stage. Results are presented for several codes that either have already been introduced into the design process or are rapidly becoming so. The codes fall into the areas of turbomachinery aerodynamics and hypersonic propulsion. In the former category, results are presented for three-dimensional inviscid and viscous flows through nozzle and unducted fan bladerows. In the latter category, results are presented for two-dimensional inviscid and viscous flows for hypersonic vehicle forebodies and engine inlets.
The impact of nuclear mass models on r-process nucleosynthesis network calculations
NASA Astrophysics Data System (ADS)
Vaughan, Kelly
2002-10-01
One route to understanding the various nucleosynthesis processes is to model them with network calculations. My project focuses on r-process network calculations, where the r-process is nucleosynthesis via rapid neutron capture, thought to take place in high-entropy supernova bubbles. One of the main uncertainties in these simulations is the nuclear physics input, and my project investigates the role that nuclear masses play in the resulting abundances. The network code (Surman & Engel 1998) involves rapid (n,γ) capture reactions onto seed nuclei in competition with photodisintegration and β decay. To fully analyze the effects of nuclear mass models on the relative isotopic abundances, the network calculations were run with the initial environmental parameters held constant throughout. The supernova model investigated is that of Qian et al. (1996), in which two r-processes, of high and low frequency, with seed nucleus ⁹⁰Se and fixed luminosity L_νe(0)/r_7(0)² ≈ 8.77, contribute to the nucleosynthesis of the heavier elements. These two r-processes, however, do not contribute equally to the total abundance observed, so the total isotopic abundance produced by both events was calculated as Y(H+L) = [Y(H) + f·Y(L)]/(f + 1), where Y(H) denotes the relative isotopic abundance produced in the high-frequency event, Y(L) corresponds to the low-frequency event, and f is the ratio of high-event matter to low-event matter produced. Having established reliable, fixed parameters, the network code was run using data files containing the mass excess, neutron separation energy, β-decay rates and neutron-capture rates derived from three different nuclear mass models. The mass models tested are the HFBCS model (Hartree-Fock BCS), derived from first principles; the ETFSI-Q model (Extended Thomas-Fermi with Strutinsky Integral including shell Quenching), known for its success in reproducing Solar System abundances; and the P-Scheme model (Aprahamian et al. 1996). The aim of this research is to test the applicability of the P-Scheme, relative to the other mass models, to r-process network calculations. References: Aprahamian, A., Gadala-Maria, A. & Cuka, N. 1996, Revista Mexicana de Fisica, 42, 1; Surman, R. & Engel, J. 1998, Phys. Rev. C, 54, 4.
Automatic Synthesis of UML Designs from Requirements in an Iterative Process
NASA Technical Reports Server (NTRS)
Schumann, Johann; Whittle, Jon; Clancy, Daniel (Technical Monitor)
2001-01-01
The Unified Modeling Language (UML) is gaining wide popularity for the design of object-oriented systems. UML combines various object-oriented graphical design notations under one common framework. A major factor for the broad acceptance of UML is that it can be conveniently used in a highly iterative, Use Case (or scenario-based) process (although the process is not a part of UML). Here, the (pre-) requirements for the software are specified rather informally as Use Cases and a set of scenarios. A scenario can be seen as an individual trace of a software artifact. Besides first sketches of a class diagram to illustrate the static system breakdown, scenarios are a favorite way of communication with the customer, because scenarios describe concrete interactions between entities and are thus easy to understand. Scenarios with a high level of detail are often expressed as sequence diagrams. Later in the design and implementation stage (elaboration and implementation phases), a design of the system's behavior is often developed as a set of statecharts. From there (and the full-fledged class diagram), actual code development is started. Current commercial UML tools support this phase by providing code generators for class diagrams and statecharts. In practice, it can be observed that the transition from requirements to design to code is a highly iterative process. In this talk, a set of algorithms is presented which perform reasonable synthesis and transformations between different UML notations (sequence diagrams, Object Constraint Language (OCL) constraints, statecharts). More specifically, we will discuss the following transformations: Statechart synthesis, introduction of hierarchy, consistency of modifications, and "design-debugging".
Auditory spatial processing in the human cortex.
Salminen, Nelli H; Tiitinen, Hannu; May, Patrick J C
2012-12-01
The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.
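As an illustration of the hemifield (opponent-channel) code described above, and not a model taken from the paper itself, the sketch below represents azimuth with two widely tuned populations, one preferring the left and one the right side of space; the difference of their rates is a monotonic read-out of source location. The sigmoidal tuning shape and slope are assumptions chosen for simplicity.

```python
import numpy as np

def hemifield_readout(azimuth_deg, slope_deg=20.0):
    """Two widely tuned opponent channels (left- and right-preferring) whose
    rate difference encodes azimuth, steepest around the midline."""
    az = np.asarray(azimuth_deg, dtype=float)
    right = 1.0 / (1.0 + np.exp(-az / slope_deg))   # right-tuned population rate
    left = 1.0 / (1.0 + np.exp(az / slope_deg))     # left-tuned population rate
    return right - left

print(hemifield_readout([-90, -45, 0, 45, 90]))     # increases monotonically left to right
```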
Dissociation between awareness and spatial coding: evidence from unilateral neglect.
Treccani, Barbara; Cubelli, Roberto; Sellaro, Roberta; Umiltà, Carlo; Della Sala, Sergio
2012-04-01
Prevalent theories about consciousness propose a causal relation between lack of spatial coding and absence of conscious experience: The failure to code the position of an object is assumed to prevent this object from entering consciousness. This is consistent with influential theories of unilateral neglect following brain damage, according to which spatial coding of neglected stimuli is defective, and this would keep their processing at the nonconscious level. Contrary to this view, we report evidence showing that spatial coding and consciousness can dissociate. A patient with left neglect, who was not aware of contralesional stimuli, was able to process their color and position. However, in contrast to (ipsilesional) consciously perceived stimuli, color and position of neglected stimuli were processed separately. We propose that individual object features, including position, can be processed without attention and consciousness and that conscious perception of an object depends on the binding of its features into an integrated percept.
Do humans make good decisions?
Summerfield, Christopher; Tsetsos, Konstantinos
2014-01-01
Human performance on perceptual classification tasks approaches that of an ideal observer, but economic decisions are often inconsistent and intransitive, with preferences reversing according to the local context. We discuss the view that suboptimal choices may result from the efficient coding of decision-relevant information, a strategy that allows expected inputs to be processed with higher gain than unexpected inputs. Efficient coding leads to ‘robust’ decisions that depart from optimality but maximise the information transmitted by a limited-capacity system in a rapidly-changing world. We review recent work showing that when perceptual environments are variable or volatile, perceptual decisions exhibit the same suboptimal context-dependence as economic choices, and propose a general computational framework that accounts for findings across the two domains. PMID:25488076
On the validation of a code and a turbulence model appropriate to circulation control airfoils
NASA Technical Reports Server (NTRS)
Viegas, J. R.; Rubesin, M. W.; Maccormack, R. W.
1988-01-01
A computer code for calculating flow about a circulation control airfoil within a wind tunnel test section has been developed. This code is being validated for eventual use as an aid to design such airfoils. The concept of code validation being used is explained. The initial stages of the process have been accomplished. The present code has been applied to a low-subsonic, 2-D flow about a circulation control airfoil for which extensive data exist. Two basic turbulence models and variants thereof have been successfully introduced into the algorithm, the Baldwin-Lomax algebraic and the Jones-Launder two-equation models of turbulence. The variants include adding a history of the jet development for the algebraic model and adding streamwise curvature effects for both models. Numerical difficulties and difficulties in the validation process are discussed. Turbulence model and code improvements to proceed with the validation process are also discussed.
Dunn, Madeleine J; Rodriguez, Erin M; Miller, Kimberly S; Gerhardt, Cynthia A; Vannatta, Kathryn; Saylor, Megan; Scheule, C Melanie; Compas, Bruce E
2011-06-01
To examine the acceptability and feasibility of coding observed verbal and nonverbal behavioral and emotional components of mother-child communication among families of children with cancer. Mother-child dyads (N=33, children ages 5-17 years) were asked to engage in a videotaped 15-min conversation about the child's cancer. Coding was done using the Iowa Family Interaction Rating Scale (IFIRS). Acceptability and feasibility of direct observation in this population were partially supported: 58% consented and 81% of those (47% of all eligible dyads) completed the task; trained raters achieved 78% agreement in ratings across codes. The construct validity of the IFIRS was demonstrated by expected associations within and between positive and negative behavioral/emotional code ratings and between mothers' and children's corresponding code ratings. Direct observation of mother-child communication about childhood cancer has the potential to be an acceptable and feasible method of assessing verbal and nonverbal behavior and emotion in this population.
Dyadic Dynamics in Young Couples Reporting Dating Violence: An Actor-Partner Interdependence Model.
Paradis, Alison; Hébert, Martine; Fernet, Mylène
2017-01-01
This study uses a combination of observational methods and dyadic data analysis to understand how boyfriends' and girlfriends' perpetration of dating violence (DV) may shape their own and their partners' problem-solving communication behaviors. A sample of 39 young heterosexual couples aged between 15 and 20 years (mean age = 17.8 years) completed a set of questionnaires and were observed during a 45-min dyadic interaction, which was coded using the Interactional Dimension Coding System (IDCS). Results suggest that neither boyfriends' nor girlfriends' own perpetration of DV was related to their display of positive and negative communication behaviors. However, estimates revealed significant partner effects, suggesting that negative communication behaviors displayed by girls and boys and positive communication behavior displayed by girls were associated to their partner's DV but not to their own. Such results confirm the need to shift our focus from an individual perspective to examining dyadic influences and processes involved in the couple system and the bidirectionality of violent relationships. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Stratakis, D.; Kishek, R. A.; Li, H.; Bernal, S.; Walter, M.; Tobin, J.; Quinn, B.; Reiser, M.; O'Shea, P. G.
2006-11-01
Tomography is the technique of reconstructing an image from its projections. It is widely used in the medical community to observe the interior of the human body by processing multiple x-ray images taken at different angles. A few pioneering researchers have adapted tomography to reconstruct detailed phase space maps of charged particle beams. Some questions arise regarding the limitations of the tomography technique for space-charge-dominated beams. For instance, is the linear space-charge force a valid approximation? Does tomography equally reproduce phase space for complex, experimentally observed, initial particle distributions? Does tomography make any assumptions about the initial distribution? This study explores the use of accurate modeling with the particle-in-cell code WARP to address these questions, using a wide range of different initial distributions in the code. The study also includes a number of experimental results on tomographic phase space mapping performed on the University of Maryland Electron Ring (UMER).
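As a minimal sketch of the underlying reconstruction step (not the WARP beam modeling itself), the snippet below recovers a 2D density from a set of 1D projections with filtered back-projection as provided by scikit-image; the "phase-space" density, grid size, and projection angles are illustrative assumptions.

```python
# Minimal sketch: reconstruct a 2D phase-space-like density from 1D projections
# using filtered back-projection (scikit-image). Purely illustrative; it ignores
# space-charge effects and the beam-line transforms used in the actual study.
import numpy as np
from skimage.transform import radon, iradon

# Hypothetical "true" density (e.g., x vs x') on a 128 x 128 grid.
x = np.linspace(-3, 3, 128)
X, XP = np.meshgrid(x, x)
density = np.exp(-(X**2 + 0.5 * XP**2))

# Projections at a set of rotation angles play the role of measured profiles.
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = radon(density, theta=angles)

# Tomographic reconstruction from the projections alone.
reconstruction = iradon(sinogram, theta=angles)
print("max reconstruction error:", np.abs(reconstruction - density).max())
```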
Real-time Automatic Search for Multi-wavelength Counterparts of DWF Transients
NASA Astrophysics Data System (ADS)
Murphy, Christopher; Cucchiara, Antonino; Andreoni, Igor; Cooke, Jeff; Hegarty, Sarah
2018-01-01
The Deeper Wider Faster (DWF) survey aims to find and classify the fastest transients in the Universe. DWF utilizes the Dark Energy Camera (DECam), collecting a continuous sequence of 20 s images over a 3 square degree field of view. Once an interesting transient is detected during DWF observations, the DWF collaboration has access to several facilities for rapid follow-up in multiple wavelengths (from gamma to radio). An online web tool has been designed to help with real-time visual classification of possible astrophysical transients in data collected by the DWF observing program. The goal of this project is to create a python-based code to improve the classification process by querying several existing archive databases. Given the DWF transient location and search radius, the developed code will extract a list of possible counterparts and all available information (e.g. magnitude, radio fluxes, distance separation). Thanks to this tool, the human classifier can make a quicker decision in order to trigger the collaboration's rapid-response resources.
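A hedged sketch of the kind of archive query such a tool performs is given below, using an astroquery cone search against VizieR; the catalogue identifier, search radius, and function name are assumptions for illustration and not the actual DWF tool configuration.

```python
# Illustrative cone search for counterparts near a transient position.
# The catalogue, radius, and column handling are assumptions, not the
# actual DWF tool configuration.
from astropy import units as u
from astropy.coordinates import SkyCoord
from astroquery.vizier import Vizier

def find_counterparts(ra_deg, dec_deg, radius_arcsec=5.0,
                      catalogs=("II/246/out",)):  # 2MASS point sources, as an example
    """Return VizieR matches within radius_arcsec of the transient position."""
    position = SkyCoord(ra=ra_deg * u.deg, dec=dec_deg * u.deg, frame="icrs")
    return Vizier.query_region(position,
                               radius=radius_arcsec * u.arcsec,
                               catalog=list(catalogs))  # TableList; empty if no match

tables = find_counterparts(150.1, 2.2)
for table in tables:
    print(table.colnames)
```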
Processes of code status transitions in hospitalized patients with advanced cancer.
El-Jawahri, Areej; Lau-Min, Kelsey; Nipp, Ryan D; Greer, Joseph A; Traeger, Lara N; Moran, Samantha M; D'Arpino, Sara M; Hochberg, Ephraim P; Jackson, Vicki A; Cashavelly, Barbara J; Martinson, Holly S; Ryan, David P; Temel, Jennifer S
2017-12-15
Although hospitalized patients with advanced cancer have a low chance of surviving cardiopulmonary resuscitation (CPR), the processes by which they change their code status from full code to do not resuscitate (DNR) are unknown. We conducted a mixed-methods study on a prospective cohort of hospitalized patients with advanced cancer. Two physicians used a consensus-driven medical record review to characterize processes that led to code status order transitions from full code to DNR. In total, 1047 hospitalizations were reviewed among 728 patients. Admitting clinicians did not address code status in 53% of hospitalizations, resulting in code status orders of "presumed full." In total, 275 patients (26.3%) transitioned from full code to DNR, and 48.7% (134 of 275 patients) of those had an order of "presumed full" at admission; however, upon further clarification, the patients expressed that they had wished to be DNR before the hospitalization. We identified 3 additional processes leading to order transition from full code to DNR: acute clinical deterioration (15.3%), discontinuation of cancer-directed therapy (17.1%), and education about the potential harms/futility of CPR (15.3%). Compared with discontinuing therapy and education, transitions because of acute clinical deterioration were associated with less patient involvement (P = .002), a shorter time to death (P < .001), and a greater likelihood of inpatient death (P = .005). One-half of code status order changes among hospitalized patients with advanced cancer were because of full code orders in patients who had a preference for DNR before hospitalization. Transitions due to acute clinical deterioration were associated with less patient engagement and a higher likelihood of inpatient death. Cancer 2017;123:4895-902. © 2017 American Cancer Society.
Auto Code Generation for Simulink-Based Attitude Determination Control System
NASA Technical Reports Server (NTRS)
MolinaFraticelli, Jose Carlos
2012-01-01
This paper details the work done to auto-generate C code from a Simulink-Based Attitude Determination Control System (ADCS) to be used in target platforms. NASA Marshall Engineers have developed an ADCS Simulink simulation to be used as a component for the flight software of a satellite. This generated code can be used for carrying out hardware-in-the-loop testing of components for a satellite in a convenient manner with easily tunable parameters. Due to the nature of the embedded hardware components such as microcontrollers, this simulation code cannot be used directly, as it is, on the target platform and must first be converted into C code; this process is known as auto code generation. In order to generate C code from this simulation, it must be modified to follow specific standards set in place by the auto code generation process. Some of these modifications include changing certain simulation models into their atomic representations, which can bring new complications into the simulation. The execution order of these models can change based on these modifications. Great care must be taken in order to maintain a working simulation that can also be used for auto code generation. After modifying the ADCS simulation for the auto code generation process, it is shown that the difference between the output data of the former and that of the latter is within acceptable bounds. Thus, it can be said that the process is a success since all the output requirements are met. Based on these results, it can be argued that this generated C code can be effectively used by any desired platform as long as it follows the specific memory requirements established in the Simulink Model.
Vahaba, Daniel M; Macedo-Lima, Matheus; Remage-Healey, Luke
2017-01-01
Vocal learning occurs during an experience-dependent, age-limited critical period early in development. In songbirds, vocal learning begins when presinging birds acquire an auditory memory of their tutor's song (sensory phase) followed by the onset of vocal production and refinement (sensorimotor phase). Hearing is necessary throughout the vocal learning critical period. One key brain area for songbird auditory processing is the caudomedial nidopallium (NCM), a telencephalic region analogous to mammalian auditory cortex. Despite NCM's established role in auditory processing, it is unclear how the response properties of NCM neurons may shift across development. Moreover, communication processing in NCM is rapidly enhanced by local 17β-estradiol (E2) administration in adult songbirds; however, the function of dynamically fluctuating E2 in NCM during development is unknown. We collected bilateral extracellular recordings in NCM coupled with reverse microdialysis delivery in juvenile male zebra finches (Taeniopygia guttata) across the vocal learning critical period. We found that auditory-evoked activity and coding accuracy were substantially higher in the NCM of sensory-aged animals compared to sensorimotor-aged animals. Further, we observed both age-dependent and lateralized effects of local E2 administration on sensory processing. In sensory-aged subjects, E2 decreased auditory responsiveness across both hemispheres; however, a similar trend was observed in age-matched control subjects. In sensorimotor-aged subjects, E2 dampened auditory responsiveness in left NCM but enhanced auditory responsiveness in right NCM. Our results reveal an age-dependent physiological shift in auditory processing and lateralized E2 sensitivity that each precisely track a key neural "switch point" from purely sensory (pre-singing) to sensorimotor (singing) in developing songbirds.
Evidence and diagnostic reporting in the IHE context.
Loef, Cor; Truyen, Roel
2005-05-01
Capturing clinical observations and findings during the diagnostic imaging process is increasingly becoming a critical step in diagnostic reporting. Standards developers, notably HL7 and DICOM, are making significant progress toward standards that enable exchanging clinical observations and findings among the various information systems of the healthcare enterprise. DICOM, like the HL7 Clinical Document Architecture (CDA), uses templates and constrained, coded vocabulary (SNOMED, LOINC, etc.). Such a representation facilitates automated software recognition of findings and observations, intrapatient comparison, correlation to norms, and outcomes research. The scope of DICOM Structured Reporting (SR) includes many findings that products routinely create in digital form (measurements, computed estimates, etc.). In the Integrating the Healthcare Enterprise (IHE) framework, two Integration Profiles are defined for clinical data capture and diagnostic reporting: Evidence Document, and Simple Image and Numeric Report. This report describes these two DICOM SR-based integration profiles in the diagnostic reporting process.
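To illustrate the flavor of a coded SR content item, the sketch below builds a single numeric measurement with pydicom; the attribute keywords follow the DICOM SR content model, but the code values, meanings, and structure are placeholders rather than a validated Evidence Document template.

```python
# Sketch of one coded SR content item (a numeric measurement) built with
# pydicom. All code values/meanings below are placeholders, not a template
# from the IHE Evidence Document profile.
from pydicom.dataset import Dataset
from pydicom.sequence import Sequence

def coded_concept(value, designator, meaning):
    item = Dataset()
    item.CodeValue = value
    item.CodingSchemeDesignator = designator
    item.CodeMeaning = meaning
    return item

measurement = Dataset()
measurement.ValueType = "NUM"
measurement.ConceptNameCodeSequence = Sequence(
    [coded_concept("12345-6", "LN", "Example measurement (placeholder)")])

value_item = Dataset()
value_item.NumericValue = "57"
value_item.MeasurementUnitsCodeSequence = Sequence(
    [coded_concept("%", "UCUM", "percent")])
measurement.MeasuredValueSequence = Sequence([value_item])

print(measurement)
```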
Comparison between observed and calculated distributions of trace species in the middle atmosphere
NASA Technical Reports Server (NTRS)
Brasseur, G.; Derudder, A.
1989-01-01
The purpose is to identify major discrepancies between empirical models and theoretical models and to stress the need for additional observations in the atmosphere and for further laboratory work, since these differences suggest either problems associated with observation techniques or errors in chemical kinetics data (or the existence of unknown processes which appear to play an important role). The model used for this investigation extends from the earth's surface to the lower thermosphere. It includes the important chemical and photochemical processes related to the oxygen, hydrogen, carbon, nitrogen and chlorine families. The chemical code is coupled with a radiative scheme which provides the heating rate due to absorption of solar radiation by ozone and the cooling rate due to the emission and absorption of terrestrial radiation by CO2, H2O and O3. The vertical transport of the species is expressed by an eddy diffusion parameterization.
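The vertical transport term mentioned here is often written as dn/dt = d/dz(K dn/dz); the sketch below steps that 1-D eddy-diffusion equation explicitly. The grid, diffusivity profile, and time step are illustrative assumptions, and the chemistry and radiation coupling of the actual model are not represented.

```python
# Minimal 1-D vertical eddy-diffusion step, dn/dt = d/dz( K dn/dz ),
# with an explicit finite-difference scheme and zero-flux boundaries.
# Grid, K profile, and time step are illustrative only.
import numpy as np

nz, dz, dt = 100, 1.0e3, 50.0            # 100 levels, 1 km spacing, 50 s step
z = np.arange(nz) * dz
K = 1.0 + 99.0 * (z / z[-1])             # eddy diffusivity (m^2/s), larger aloft
n = np.exp(-((z - 30e3) / 5e3) ** 2)     # initial tracer: a blob near 30 km

def diffuse(n, K, dz, dt):
    flux = np.zeros(len(n) + 1)
    K_iface = 0.5 * (K[:-1] + K[1:])                  # K at interior interfaces
    flux[1:-1] = -K_iface * (n[1:] - n[:-1]) / dz     # -K dn/dz
    return n - dt * (flux[1:] - flux[:-1]) / dz

for _ in range(1000):
    n = diffuse(n, K, dz, dt)
print("column total after diffusion:", n.sum())       # conserved with zero-flux walls
```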
P-Code-Enhanced Encryption-Mode Processing of GPS Signals
NASA Technical Reports Server (NTRS)
Young, Lawrence; Meehan, Thomas; Thomas, Jess B.
2003-01-01
A method of processing signals in a Global Positioning System (GPS) receiver has been invented to enable the receiver to recover some of the information that is otherwise lost when GPS signals are encrypted at the transmitters. The need for this method arises because, at the option of the military, precision GPS code (P-code) is sometimes encrypted by a secret binary code, denoted the A code. Authorized users can recover the full signal with knowledge of the A-code. However, even in the absence of knowledge of the A-code, one can track the encrypted signal by use of an estimate of the A-code. The present invention is a method of making and using such an estimate. In comparison with prior such methods, this method makes it possible to recover more of the lost information and obtain greater accuracy.
NASA Astrophysics Data System (ADS)
Hempelmann, Nils; Ehbrecht, Carsten; Alvarez-Castro, Carmen; Brockmann, Patrick; Falk, Wolfgang; Hoffmann, Jörg; Kindermann, Stephan; Koziol, Ben; Nangini, Cathy; Radanovics, Sabine; Vautard, Robert; Yiou, Pascal
2018-01-01
Analyses of extreme weather events and their impacts often require big data processing of ensembles of climate model simulations. Researchers generally proceed by downloading the data from the providers and processing the data files 'at home' with their own analysis processes. However, the growing amount of available climate model and observation data makes this procedure quite awkward. In addition, data processing knowledge is kept local, instead of being consolidated into a common resource of reusable code. These drawbacks can be mitigated by using a web processing service (WPS). A WPS hosts services such as data analysis processes that are accessible over the web, and can be installed close to the data archives. We developed a WPS named 'flyingpigeon' that communicates over an HTTP network protocol based on standards defined by the Open Geospatial Consortium (OGC), to be used by climatologists and impact modelers as a tool for analyzing large datasets remotely. Here, we present the current processes we developed in flyingpigeon relating to commonly-used processes (preprocessing steps, spatial subsets at continent, country or region level, and climate indices) as well as methods for specific climate data analysis (weather regimes, analogues of circulation, segetal flora distribution, and species distribution models). We also developed a novel, browser-based interactive data visualization for circulation analogues, illustrating the flexibility of WPS in designing custom outputs. Bringing the software to the data instead of transferring the data to the code is becoming increasingly necessary, especially with the upcoming massive climate datasets.
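From the client side, interacting with an OGC WPS typically means requesting the capabilities document and then executing a named process; a hedged sketch with OWSLib is shown below. The endpoint URL, process identifier, and input names are placeholders, so a real deployment's GetCapabilities response should be consulted for the actual names.

```python
# Illustrative OGC-WPS client call with OWSLib. The endpoint URL, the process
# identifier "subset_countries", and the input keys are placeholders; inspect
# the GetCapabilities response of an actual WPS instance for real names.
from owslib.wps import WebProcessingService

wps = WebProcessingService("http://example.org/wps", verbose=False)
print(wps.identification.title)
for process in wps.processes:                      # discover available processes
    print(process.identifier, "-", process.title)

# Execute one process with key/value inputs (names are assumptions).
execution = wps.execute("subset_countries",
                        inputs=[("region", "DEU"),
                                ("resource", "http://example.org/data/tas.nc")])
print(execution.status)
```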
Zhong, Qiu-Yue; Karlson, Elizabeth W; Gelaye, Bizu; Finan, Sean; Avillach, Paul; Smoller, Jordan W; Cai, Tianxi; Williams, Michelle A
2018-05-29
We examined the comparative performance of structured diagnostic codes vs. natural language processing (NLP) of unstructured text for screening suicidal behavior among pregnant women in electronic medical records (EMRs). Women aged 10-64 years with at least one diagnostic code related to pregnancy or delivery (N = 275,843) from Partners HealthCare were included as our "datamart." Diagnostic codes related to suicidal behavior were applied to the datamart to screen women for suicidal behavior. Among women without any diagnostic codes related to suicidal behavior (n = 273,410), 5880 women were randomly sampled, of whom 1120 had at least one mention of terms related to suicidal behavior in clinical notes. NLP was then used to process clinical notes for the 1120 women. Chart reviews were performed for subsamples of women. Using diagnostic codes, 196 pregnant women were screened positive for suicidal behavior, among whom 149 (76%) had confirmed suicidal behavior by chart review. Using NLP among those without diagnostic codes, 486 pregnant women were screened positive for suicidal behavior, among whom 146 (30%) had confirmed suicidal behavior by chart review. The use of NLP substantially improves the sensitivity of screening suicidal behavior in EMRs. However, the prevalence of confirmed suicidal behavior was lower among women who did not have diagnostic codes for suicidal behavior but screened positive by NLP. NLP should be used together with diagnostic codes for future EMR-based phenotyping studies for suicidal behavior.
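The first filtering stage described (flagging notes that mention terms related to suicidal behavior) can be caricatured as a keyword screen over free text; the toy sketch below uses a term list and a crude negation check that are illustrative assumptions, far simpler than the clinical NLP pipeline the study actually used.

```python
# Toy keyword screen over free-text notes for terms related to suicidal
# behavior. The term list and the crude negation check are illustrative;
# the study relied on a full clinical NLP pipeline, not this heuristic.
import re

TERMS = re.compile(r"\b(suicid\w*|self[- ]harm|overdose)\b", re.IGNORECASE)
NEGATION = re.compile(r"\b(denies|no|without|negative for)\b[^.]{0,40}$", re.IGNORECASE)

def screen_note(note: str) -> bool:
    """Return True if the note mentions a target term that is not obviously negated."""
    for match in TERMS.finditer(note):
        preceding = note[:match.start()]
        if not NEGATION.search(preceding):
            return True
    return False

notes = ["Patient denies suicidal ideation.",
         "Admitted after intentional overdose last night."]
print([screen_note(n) for n in notes])   # expected: [False, True]
```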
Alarcon, Gene M; Gamble, Rose F; Ryan, Tyler J; Walter, Charles; Jessup, Sarah A; Wood, David W; Capiola, August
2018-07-01
Computer programs are a ubiquitous part of modern society, yet little is known about the psychological processes that underlie reviewing code. We applied the heuristic-systematic model (HSM) to investigate the influence of computer code comments on perceptions of code trustworthiness. The study explored the influence of validity, placement, and style of comments in code on trustworthiness perceptions and time spent on code. Results indicated valid comments led to higher trust assessments and more time spent on the code. Properly placed comments led to lower trust assessments and had a marginal effect on time spent on code; however, the effect was no longer significant after controlling for effects of the source code. Low style comments led to marginally higher trustworthiness assessments, but high style comments led to longer time spent on the code. Several interactions were also found. Our findings suggest the relationship between code comments and perceptions of code trustworthiness is not as straightforward as previously thought. Additionally, the current paper extends the HSM to the programming literature. Copyright © 2018 Elsevier Ltd. All rights reserved.
The generation of meaningful information in molecular systems.
Wills, Peter R
2016-03-13
The physico-chemical processes occurring inside cells are under the computational control of genetic (DNA) and epigenetic (internal structural) programming. The origin and evolution of genetic information (nucleic acid sequences) is reasonably well understood, but scant attention has been paid to the origin and evolution of the molecular biological interpreters that give phenotypic meaning to the sequence information that is quite faithfully replicated during cellular reproduction. The near universality and age of the mapping from nucleotide triplets to amino acids embedded in the functionality of the protein synthetic machinery speaks to the early development of a system of coding which is still extant in every living organism. We take the origin of genetic coding as a paradigm of the emergence of computation in natural systems, focusing on the requirement that the molecular components of an interpreter be synthesized autocatalytically. Within this context, it is seen that interpreters of increasing complexity are generated by series of transitions through stepped dynamic instabilities (non-equilibrium phase transitions). The early phylogeny of the amino acyl-tRNA synthetase enzymes is discussed in such terms, leading to the conclusion that the observed optimality of the genetic code is a natural outcome of the processes of self-organization that produced it. © 2016 The Author(s).
Deep Learning for Automated Extraction of Primary Sites From Cancer Pathology Reports.
Qiu, John X; Yoon, Hong-Jun; Fearn, Paul A; Tourassi, Georgia D
2018-01-01
Pathology reports are a primary source of information for cancer registries which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study, we investigated deep learning and a convolutional neural network (CNN), for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods but trends were contingent on the CNN method and cancer site. These encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.
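A skeleton of the CNN-style text classifier described (token embedding, 1-D convolution, max pooling, softmax over topography codes) is sketched below in Keras; the vocabulary size, sequence length, filter configuration, and 12-way output are placeholders, not the paper's exact architecture or hyperparameters.

```python
# Skeleton of a 1-D CNN text classifier in the spirit of the approach
# described (embedding -> convolution -> global max pooling -> softmax).
# Sizes and the 12-way output are placeholders, not the paper's settings.
import tensorflow as tf

VOCAB_SIZE, MAX_LEN, N_CODES = 20000, 1500, 12

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),
    tf.keras.layers.Conv1D(100, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(N_CODES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(token_ids, code_labels, validation_split=0.1, epochs=10)
# where token_ids has shape (n_reports, MAX_LEN) and code_labels holds
# integer ICD-O-3 topography class indices.
```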
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Sapan; Quach, Tu-Thach; Parekh, Ojas
2016-01-06
In this study, the exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
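An idealized view of the two crossbar kernels named here is sketched below: the parallel read as a vector-matrix multiply of input voltages against a conductance matrix, and the parallel write as a rank-1 outer-product update. Array size, conductance scale, and learning rate are illustrative, and device noise and quantization are ignored.

```python
# Idealized view of the two analog-crossbar kernels described: a read is a
# vector-matrix multiply (input voltages x conductance matrix) and a write
# is a rank-1 outer-product update. Device noise/quantization are ignored.
import numpy as np

N = 256
G = np.random.rand(N, N) * 1e-6          # conductances (siemens), illustrative scale
v = np.random.rand(N)                    # input voltages applied to the rows

# Parallel read: all N column currents obtained in one step, i = v @ G.
i = v @ G

# Parallel write (rank-1 update): conductance change proportional to the
# outer product of row and column programming signals.
x, y = np.random.rand(N), np.random.rand(N)
eta = 1e-9
G += eta * np.outer(x, y)

print(i.shape, G.shape)
```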
Modeling of negative ion transport in a plasma source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riz, David; Département de Recherches sur la Fusion Contrôlée, CE Cadarache, 13108 St Paul lez Durance; Paméla, Jérôme
1998-08-20
A code called NIETZSCHE has been developed to simulate the negative ion transport in a plasma source, from their birth place to the extraction holes. The ion trajectory is calculated by numerically solving the 3-D motion equation, while the atomic processes of destruction, of elastic collision H⁻/H⁺ and of charge exchange H⁻/H⁰ are handled at each time step by a Monte-Carlo procedure. This code can be used to calculate the extraction probability of a negative ion produced at any location inside the source. Calculations performed with NIETZSCHE have made it possible to explain, either quantitatively or qualitatively, several phenomena observed in negative ion sources, such as the isotopic H⁻/D⁻ effect, and the influence of the plasma grid bias or of the magnetic filter on the negative ion extraction. The code has also shown that in the type of sources contemplated for ITER, which operate at large arc power densities (>1 W cm⁻³), negative ions can reach the extraction region provided they are produced at a distance lower than 2 cm from the plasma grid in the case of 'volume production' (dissociative attachment processes), or if they are produced at the plasma grid surface, in the vicinity of the extraction holes.
Modeling of negative ion transport in a plasma source (invited)
NASA Astrophysics Data System (ADS)
Riz, David; Paméla, Jérôme
1998-02-01
A code called NIETZSCHE has been developed to simulate the negative ion transport in a plasma source, from their birth place to the extraction holes. The H-/D- trajectory is calculated by numerically solving the 3D motion equation, while the atomic processes of destruction, of elastic collision with H+/D+ and of charge exchange with H0/D0 are handled at each time step by a Monte Carlo procedure. This code can be used to calculate the extraction probability of a negative ion produced at any location inside the source. Calculations performed with NIETZSCHE have made it possible to explain, either quantitatively or qualitatively, several phenomena observed in negative ion sources, such as the isotopic H-/D- effect, and the influence of the plasma grid bias or of the magnetic filter on the negative ion extraction. The code has also shown that, in the type of sources contemplated for ITER, which operate at large arc power densities (>1 W cm-3), negative ions can reach the extraction region provided they are produced at a distance lower than 2 cm from the plasma grid in the case of volume production (dissociative attachment processes), or if they are produced at the plasma grid surface, in the vicinity of the extraction holes.
Modeling of negative ion transport in a plasma source
NASA Astrophysics Data System (ADS)
Riz, David; Paméla, Jérôme
1998-08-01
A code called NIETZSCHE has been developed to simulate the negative ion transport in a plasma source, from their birth place to the extraction holes. The ion trajectory is calculated by numerically solving the 3-D motion equation, while the atomic processes of destruction, of elastic collision H-/H+ and of charge exchange H-/H0 are handled at each time step by a Monte-Carlo procedure. This code can be used to calculate the extraction probability of a negative ion produced at any location inside the source. Calculations performed with NIETZSCHE have made it possible to explain, either quantitatively or qualitatively, several phenomena observed in negative ion sources, such as the isotopic H-/D- effect, and the influence of the plasma grid bias or of the magnetic filter on the negative ion extraction. The code has also shown that in the type of sources contemplated for ITER, which operate at large arc power densities (>1 W cm-3), negative ions can reach the extraction region provided they are produced at a distance lower than 2 cm from the plasma grid in the case of 'volume production' (dissociative attachment processes), or if they are produced at the plasma grid surface, in the vicinity of the extraction holes.
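The general scheme described (push a test ion one time step, then draw against destruction and charge-exchange probabilities, and test for extraction) is sketched schematically below; the field, the collision rates, the geometry, and the extraction criterion are invented placeholders, not the NIETZSCHE model.

```python
# Schematic of the transport loop described: push an ion one time step, then
# draw against destruction / charge-exchange probabilities. Field, rates,
# geometry, and the extraction test are placeholders, not NIETZSCHE itself.
import numpy as np

rng = np.random.default_rng(0)
DT = 1e-9                                  # time step (s)
Q, M = -1.602e-19, 1.674e-27               # approximate H- charge and mass
NU_DESTRUCTION = 1e5                       # assumed destruction rate (1/s)
NU_CHARGE_EXCHANGE = 5e4                   # assumed H-/H0 exchange rate (1/s)

def track(r, v, e_field, n_steps=100_000):
    """Return 'extracted', 'destroyed', or 'lost' for one test ion."""
    for _ in range(n_steps):
        v = v + (Q / M) * e_field(r) * DT          # explicit Euler push (E field only)
        r = r + v * DT
        u = rng.random()
        if u < NU_DESTRUCTION * DT:
            return "destroyed"
        elif u < (NU_DESTRUCTION + NU_CHARGE_EXCHANGE) * DT:
            v = rng.normal(0.0, 2e3, size=3)       # ion picks up the neutral's velocity
        if r[2] <= 0.0:                            # plasma-grid plane with a hole on axis
            return "extracted" if np.hypot(r[0], r[1]) < 5e-3 else "lost"
    return "lost"

# Uniform field (V/m) that accelerates the negative ion toward the grid at z = 0.
uniform_field = lambda r: np.array([0.0, 0.0, 1e3])
print(track(np.array([0.0, 0.0, 0.02]), np.zeros(3), uniform_field))
```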
Analysis of Memory Codes and Cumulative Rehearsal in Observational Learning
ERIC Educational Resources Information Center
Bandura, Albert; And Others
1974-01-01
The present study examined the influence of memory codes varying in meaningfulness and retrievability and cumulative rehearsal on retention of observationally learned responses over increasing temporal intervals. (Editor)
Factors Affecting Christian Parents' School Choice Decision Processes: A Grounded Theory Study
ERIC Educational Resources Information Center
Prichard, Tami G.; Swezey, James A.
2016-01-01
This study identifies factors affecting the decision processes for school choice by Christian parents. Grounded theory design incorporated interview transcripts, field notes, and a reflective journal to analyze themes. Comparative analysis, including open, axial, and selective coding, was used to reduce the coded statements to five code families:…
ERIC Educational Resources Information Center
Emmorey, Karen; Petrich, Jennifer A. F.; Gollan, Tamar H.
2012-01-01
Bilinguals who are fluent in American Sign Language (ASL) and English often produce "code-blends"--simultaneously articulating a sign and a word while conversing with other ASL-English bilinguals. To investigate the cognitive mechanisms underlying code-blend processing, we compared picture-naming times (Experiment 1) and semantic categorization…
77 FR 67340 - National Fire Codes: Request for Comments on NFPA's Codes and Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-09
... the process. The Code Revision Process contains four basic steps that are followed for developing new documents as well as revising existing documents. Step 1: Public Input Stage, which results in the First Draft Report (formerly ROP); Step 2: Comment Stage, which results in the Second Draft Report (formerly...
A Test of Two Alternative Cognitive Processing Models: Learning Styles and Dual Coding
ERIC Educational Resources Information Center
Cuevas, Joshua; Dawson, Bryan L.
2018-01-01
This study tested two cognitive models, learning styles and dual coding, which make contradictory predictions about how learners process and retain visual and auditory information. Learning styles-based instructional practices are common in educational environments despite a questionable research base, while the use of dual coding is less…
Continuities in Reading Acquisition, Reading Skill, and Reading Disability.
ERIC Educational Resources Information Center
Perfetti, Charles A.
1986-01-01
Learning to read depends on eventual mastery of coding procedures, and even skilled reading depends on coding processes low in cost to processing resources. Reading disability may be understood as a point on an ability continuum or a wide range of coding ability. Instructional goals of word reading skill, including rapid and fluent word…
Studies of Particle Wake Potentials in Plasmas
NASA Astrophysics Data System (ADS)
Ellis, Ian; Graziani, Frank; Glosli, James; Strozzi, David; Surh, Michael; Richards, David; Decyk, Viktor; Mori, Warren
2011-10-01
Fast Ignition studies require a detailed understanding of electron scattering, stopping, and energy deposition in plasmas with variable values for the number of particles within a Debye sphere. Presently there is disagreement in the literature concerning the proper description of these processes. Developing and validating proper descriptions requires studying the processes using first-principle electrostatic simulations and possibly including magnetic fields. We are using the particle-particle particle-mesh (PPPM) code ddcMD and the particle-in-cell (PIC) code BEPS to perform these simulations. As a starting point in our study, we examine the wake of a particle passing through a plasma in 3D electrostatic simulations performed with ddcMD and with BEPS using various cell sizes. In this poster, we compare the wakes we observe in these simulations with each other and predictions from Vlasov theory. Prepared by LLNL under Contract DE-AC52-07NA27344 and by UCLA under Grant DE-FG52-09NA29552.
NASA Technical Reports Server (NTRS)
Ivry, Richard B.; Franz, Elizabeth A.; Kingstone, Alan; Johnston, James C.; Null, Cynthia H. (Technical Monitor)
1995-01-01
A callosotomy patient was tested in two dual-task experiments requiring successive speeded responses to lateralized stimuli. In accord with the recent findings of Pashler, O'Brien, Luck, Hillyard, Mangun, and Gazzaniga (in press), the patient showed a robust psychological refractory period effect (PRP): responses on Task 2 were inversely related to the stimulus-onset asynchrony (SOA). However, three aspects of our data indicated that the processing limitations for the patient were different from those observed with control subjects. First, the split-brain patient did not show an increase in reaction time when the two tasks required responses from a common output system (i.e., both manual responses) in comparison to when different output systems were used (i.e., manual-vocal). Second, inconsistent stimulus-response mappings for the two tasks greatly inflated response latencies for the control subjects, but had minimal effect on the performance of the split-brain patient. Third, the consistency manipulation was underadditive with SOA for only the patient, suggesting a later bottleneck in processing following callosotomy than was observed for the control subjects. It is proposed that sectioning the corpus callosum eliminates interference resulting from competing stimulus-response codes. Nonetheless, dual-task interference persists for the split-brain subject because a subcortical gate constrains when selected responses can be implemented.
Antoine, Sophie; Ranzini, Mariagrazia; Gebuis, Titia; van Dijck, Jean-Philippe; Gevers, Wim
2017-10-01
A largely substantiated view in the domain of working memory is that the maintenance of serial order is achieved by generating associations of each item with an independent representation of its position, so-called position markers. Recent studies reported that the ordinal position of an item in verbal working memory interacts with spatial processing. This suggests that position markers might be spatial in nature. However, these interactions were so far observed in tasks implying a clear binary categorization of space (i.e., with left and right responses or targets). Such binary categorizations leave room for alternative interpretations, such as congruency between non-spatial categorical codes for ordinal position (e.g., begin and end) and spatial categorical codes for response (e.g., left and right). Here we discard this interpretation by providing evidence that this interaction can also be observed in a task that draws upon a continuous processing of space, the line bisection task. Specifically, bisections are modulated by ordinal position in verbal working memory, with lines bisected more towards the right after retrieving items from the end compared to the beginning of the memorized sequence. This supports the idea that position markers are intrinsically spatial in nature.
NASA Astrophysics Data System (ADS)
Cassan, Arnaud
2017-07-01
The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategy. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as satellites Spitzer or Kepler/K2. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method that allows to compute the quadrupole and hexadecapole approximations of the finite-source magnification with more efficiency than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which advocates for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source python codes.
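For context on what such routines evaluate, the standard point-source point-lens magnification is A(u) = (u² + 2)/(u√(u² + 4)); the finite-source quadrupole and hexadecapole corrections discussed above build on repeated evaluations of this quantity. The sketch below implements only the point-source part and a single-lens light curve; the event parameters are illustrative and the finite-source corrections themselves are not reproduced.

```python
# Point-source point-lens (PSPL) microlensing magnification and light curve.
# Only the point-source formula is shown; the quadrupole/hexadecapole
# finite-source corrections of the cited work are not reproduced here.
import numpy as np

def magnification(u):
    """A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)) for impact parameter u."""
    u = np.asarray(u, dtype=float)
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def light_curve(t, t0=0.0, u0=0.1, tE=20.0):
    """Single-lens light curve with u(t) = sqrt(u0^2 + ((t - t0)/tE)^2)."""
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
    return magnification(u)

t = np.linspace(-50, 50, 201)     # days
print(light_curve(t).max())       # peak magnification, roughly 1/u0 for small u0
```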
Chien, Maw-Sheng; Gilbert, Teresa L.; Huang, Chienjin; Landolt, Marsha L.; O'Hara, Patrick J.; Winton, James R.
1992-01-01
The complete sequence coding for the 57-kDa major soluble antigen of the salmonid fish pathogen, Renibacterium salmoninarum, was determined. The gene contained an open reading frame of 1671 nucleotides coding for a protein of 557 amino acids with a calculated Mr value of 57190. The first 26 amino acids constituted a signal peptide. The deduced sequence for amino acid residues 27–61 was in agreement with the 35 N-terminal amino acid residues determined by microsequencing, suggesting the protein is synthesized as a 557-amino acid precursor and processed to produce a mature protein of Mr 54505. Two regions of the protein contained imperfect direct repeats. The first region contained two copies of an 81-residue repeat, the second contained five copies of an unrelated 25-residue repeat. Also, a perfect inverted repeat (including three in-frame UAA stop codons) was observed at the carboxyl-terminus of the gene.
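The reading-frame arithmetic reported here can be checked in a couple of lines (1671 nt / 3 codons per residue = 557 residues; removing the 26-residue signal peptide leaves a 531-residue mature protein); the Biopython call at the end shows the generic translation step on a placeholder sequence, not the actual antigen gene.

```python
# Quick check of the reading-frame arithmetic in the abstract; the tiny
# sequence passed to Biopython below is a placeholder, not the antigen gene.
from Bio.Seq import Seq

orf_length_nt = 1671
coding_aa = orf_length_nt // 3          # 557 residues in the precursor
signal_peptide_aa = 26
mature_aa = coding_aa - signal_peptide_aa
print(coding_aa, mature_aa)             # 557, 531 (mature Mr reported as 54505)

# Translating an ORF with Biopython (placeholder sequence):
print(Seq("ATGGCTGAATCTAAA").translate())   # -> MAESK
```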
The effect of word concreteness on recognition memory.
Fliessbach, K; Weis, S; Klaver, P; Elger, C E; Weber, B
2006-09-01
Concrete words that are readily imagined are better remembered than abstract words. Theoretical explanations for this effect either claim a dual coding of concrete words in the form of both a verbal and a sensory code (dual-coding theory), or a more accessible semantic network for concrete words than for abstract words (context-availability theory). However, the neural mechanisms of improved memory for concrete versus abstract words are poorly understood. Here, we investigated the processing of concrete and abstract words during encoding and retrieval in a recognition memory task using event-related functional magnetic resonance imaging (fMRI). As predicted, memory performance was significantly better for concrete words than for abstract words. Abstract words elicited stronger activations of the left inferior frontal cortex both during encoding and recognition than did concrete words. Stronger activation of this area was also associated with successful encoding for both abstract and concrete words. Concrete words elicited stronger activations bilaterally in the posterior inferior parietal lobe during recognition. The left parietal activation was associated with correct identification of old stimuli. The anterior precuneus, left cerebellar hemisphere and the posterior and anterior cingulate cortex showed activations both for successful recognition of concrete words and for online processing of concrete words during encoding. Additionally, we observed a correlation across subjects between brain activity in the left anterior fusiform gyrus and hippocampus during recognition of learned words and the strength of the concreteness effect. These findings support the idea of specific brain processes for concrete words, which are reactivated during successful recognition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, Liang; Zhao, Yiqing; Hu, Xiaoyan
2014-07-15
Experiments observing stimulated Raman backscatter (SRS) and stimulated Brillouin backscatter (SBS) in Hohlraums were performed on the Shenguang-III (SG-III) prototype facility for the first time in 2011. In this paper, relevant experimental results are analyzed for the first time with a one-dimensional spectral analysis code, which is developed to study the coexistent process of SRS and SBS under Hohlraum plasma conditions. Spectral features of the backscattered light are discussed for different plasma parameters. In the case of empty Hohlraum experiments, simulation results indicate that SBS, which grows fast at the energy deposition region near the Hohlraum wall, is the dominant instability process. The time-resolved spectra of SRS and SBS are numerically obtained, which agree with the experimental observations. For the gas-filled Hohlraum experiments, simulation results show that SBS grows fastest in Au plasma and amplifies convectively in C₅H₁₂ gas, whereas SRS mainly grows in the high density region of the C₅H₁₂ gas. Gain spectra and the spectra of backscattered light are simulated along the ray path, which clearly show the location where the intensity of scattered light with a certain wavelength increases. This work is helpful for understanding the observed spectral features of SRS and SBS. The experiments and relevant analysis provide references for ignition target design in the future.
Standardizing clinical laboratory data for secondary use.
Abhyankar, Swapna; Demner-Fushman, Dina; McDonald, Clement J
2012-08-01
Clinical databases provide a rich source of data for answering clinical research questions. However, the variables recorded in clinical data systems are often identified by local, idiosyncratic, and sometimes redundant and/or ambiguous names (or codes) rather than unique, well-organized codes from standard code systems. This reality discourages research use of such databases, because researchers must invest considerable time in cleaning up the data before they can ask their first research question. Researchers at MIT developed MIMIC-II, a nearly complete collection of clinical data about intensive care patients. Because its data are drawn from existing clinical systems, it has many of the problems described above. In collaboration with the MIT researchers, we have begun a process of cleaning up the data and mapping the variable names and codes to LOINC codes. Our first step, which we describe here, was to map all of the laboratory test observations to LOINC codes. We were able to map 87% of the unique laboratory tests that cover 94% of the total number of laboratory test results. Of the 13% of tests that we could not map, nearly 60% were due to test names whose real meaning could not be discerned and 29% represented tests that were not yet included in the LOINC table. These results suggest that LOINC codes cover most of the laboratory tests used in critical care. We have delivered this work to the MIMIC-II researchers, who have included it in their standard MIMIC-II database release so that researchers who use this database in the future will not have to do this work. Published by Elsevier Inc.
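A minimal sketch of this kind of mapping step is given below: normalize the local test name, try an exact lookup, and fall back to a fuzzy suggestion queued for manual review. All local names and LOINC codes shown are placeholders, and the real mapping effort relies on expert review rather than string matching alone.

```python
# Sketch of mapping local, idiosyncratic lab-test names to LOINC codes:
# exact lookup first, then a fuzzy suggestion for manual review.
# All local names and LOINC codes below are placeholders.
import difflib

LOINC_MAP = {
    "glucose serum": "0000-0",     # placeholder codes, not real LOINC entries
    "potassium serum": "0000-1",
    "hemoglobin blood": "0000-2",
}

def map_test(local_name):
    key = " ".join(local_name.lower().replace("_", " ").split())
    if key in LOINC_MAP:
        return LOINC_MAP[key], "exact"
    suggestion = difflib.get_close_matches(key, LOINC_MAP.keys(), n=1, cutoff=0.6)
    if suggestion:
        return LOINC_MAP[suggestion[0]], "needs review"
    return None, "unmapped"

for name in ["GLUCOSE_SERUM", "Hgb blood", "lactate csf"]:
    print(name, "->", map_test(name))
```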
Observations Regarding Use of Advanced CFD Analysis, Sensitivity Analysis, and Design Codes in MDO
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Hou, Gene J. W.; Taylor, Arthur C., III
1996-01-01
Observations regarding the use of advanced computational fluid dynamics (CFD) analysis, sensitivity analysis (SA), and design codes in gradient-based multidisciplinary design optimization (MDO) reflect our perception of the interactions required of CFD and our experience in recent aerodynamic design optimization studies using CFD. Sample results from these latter studies are summarized for conventional optimization (analysis - SA codes) and simultaneous analysis and design optimization (design code) using both Euler and Navier-Stokes flow approximations. The amount of computational resources required for aerodynamic design using CFD via analysis - SA codes is greater than that required for design codes. Thus, an MDO formulation that utilizes the more efficient design codes where possible is desired. However, in the aerovehicle MDO problem, the various disciplines that are involved have different design points in the flight envelope; therefore, CFD analysis - SA codes are required at the aerodynamic 'off design' points. The suggested MDO formulation is a hybrid multilevel optimization procedure that consists of both multipoint CFD analysis - SA codes and multipoint CFD design codes that perform suboptimizations.
Performance Bounds on Two Concatenated, Interleaved Codes
NASA Technical Reports Server (NTRS)
Moision, Bruce; Dolinar, Samuel
2010-01-01
A method has been developed for computing bounds on the performance of a code composed of two linear binary codes generated by two encoders serially concatenated through an interleaver. Originally intended for use in evaluating the performances of some codes proposed for deep-space communication links, the method can also be used in evaluating the performances of short-block-length codes in other applications. The method applies, more specifically, to a communication system in which the following processes take place: At the transmitter, the original binary information that one seeks to transmit is first processed by an encoder into an outer code (Co) characterized by, among other things, a pair of numbers (n,k), where n (n > k) is the total number of code bits associated with k information bits and n - k bits are used for correcting or at least detecting errors. Next, the outer code is processed through either a block or a convolutional interleaver. In the block interleaver, the words of the outer code are processed in blocks of I words. In the convolutional interleaver, the interleaving operation is performed bit-wise in N rows with delays that are multiples of B bits. The output of the interleaver is processed through a second encoder to obtain an inner code (Ci) characterized by (ni,ki). The output of the inner code is transmitted over an additive-white-Gaussian-noise channel characterized by a symbol signal-to-noise ratio (SNR) Es/No and a bit SNR Eb/No. At the receiver, an inner decoder generates estimates of bits. Depending on whether a block or a convolutional interleaver is used at the transmitter, the sequence of estimated bits is processed through a block or a convolutional de-interleaver, respectively, to obtain estimates of code words. Then the estimates of the code words are processed through an outer decoder, which generates estimates of the original information along with flags indicating which estimates are presumed to be correct and which are found to be erroneous. From the perspective of the present method, the topic of major interest is the performance of the communication system as quantified in the word-error rate and the undetected-error rate as functions of the SNRs and the total latency of the interleaver and inner code. The method is embodied in equations that describe bounds on these functions. Throughout the derivation of the equations that embody the method, it is assumed that the decoder for the outer code corrects any error pattern of t or fewer errors, detects any error pattern of s or fewer errors, may detect some error patterns of more than s errors, and does not correct any patterns of more than t errors. Because a mathematically complete description of the equations that embody the method and of the derivation of the equations would greatly exceed the space available for this article, it must suffice to summarize by reporting that the derivation includes consideration of several complex issues, including relationships between latency and memory requirements for block and convolutional codes, burst error statistics, enumeration of error-event intersections, and effects of different interleaving depths. In a demonstration, the method was used to calculate bounds on the performances of several communication systems, each based on serial concatenation of a (63,56) expurgated Hamming code with a convolutional inner code through a convolutional interleaver.
The bounds calculated by use of the method were compared with results of numerical simulations of performances of the systems to show the regions where the bounds are tight (see figure).
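The simplest ingredient of bounds of this type is the binomial tail: for a length-n block code whose decoder corrects all patterns of up to t errors on a memoryless channel with bit-error probability p, the word-error probability satisfies P(word error) ≤ Σ_{i>t} C(n,i) p^i (1-p)^(n-i). The sketch below evaluates just that expression; the burst-error and interleaver analysis of the full method, and the value assumed for p, are not taken from the source.

```python
# Binomial-tail building block for block-code word-error bounds:
# P(word error) <= sum_{i > t} C(n, i) p^i (1 - p)^(n - i)
# for a decoder that corrects all patterns of up to t errors.
from math import comb

def word_error_bound(n, t, p):
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(t + 1, n + 1))

# Example: a (63, 56) code used to correct single errors (t = 1), with an
# assumed channel bit-error probability of 1e-3.
print(word_error_bound(63, 1, 1e-3))
```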
Toward Supersonic Retropropulsion CFD Validation
NASA Technical Reports Server (NTRS)
Kleb, Bil; Schauerhamer, D. Guy; Trumble, Kerry; Sozer, Emre; Barnhardt, Michael; Carlson, Jan-Renee; Edquist, Karl
2011-01-01
This paper begins the process of verifying and validating computational fluid dynamics (CFD) codes for supersonic retropropulsive flows. Four CFD codes (DPLR, FUN3D, OVERFLOW, and US3D) are used to perform various numerical and physical modeling studies toward the goal of comparing predictions with a wind tunnel experiment specifically designed to support CFD validation. Numerical studies run the gamut in rigor from code-to-code comparisons to observed order-of-accuracy tests. Results indicate that, for this complex flowfield involving time-dependent shocks and vortex shedding, design order of accuracy is not clearly evident. Also explored is the extent of physical modeling necessary to predict the salient flowfield features found in high-speed Schlieren images and surface pressure measurements taken during the validation experiment. Physical modeling studies include geometric items such as wind tunnel wall and sting mount interference, as well as turbulence modeling that ranges from a RANS (Reynolds-Averaged Navier-Stokes) 2-equation model to DES (Detached Eddy Simulation) models. These studies indicate that tunnel wall interference is minimal for the cases investigated; model mounting hardware effects are confined to the aft end of the model; and sparse grid resolution and turbulence modeling can damp or entirely dissipate the unsteadiness of this self-excited flow.
Monte Carlo simulation of a β-γ coincidence system using plastic scintillators in 4π geometry
NASA Astrophysics Data System (ADS)
Dias, M. S.; Piuvezam-Filho, H.; Baccarelli, A. M.; Takeda, M. N.; Koskinas, M. F.
2007-09-01
A modified version of a Monte Carlo code called Esquema, developed at the Nuclear Metrology Laboratory in IPEN, São Paulo, Brazil, has been applied for simulating a 4πβ(PS)-γ coincidence system designed for primary radionuclide standardisation. This system consists of a plastic scintillator in 4π geometry, for alpha or electron detection, coupled to a NaI(Tl) counter for gamma-ray detection. The response curves for monoenergetic electrons and photons have been calculated previously by the Penelope code and applied as input data to the Esquema code. The latter code simulates all the disintegration processes, from the precursor nucleus to the ground state of the daughter radionuclide. As a result, the curve of the observed disintegration rate as a function of the beta-efficiency parameter can be simulated. A least-squares fit between the experimental activity values and the Monte Carlo calculation provided the actual radioactive source activity, without the need for conventional extrapolation procedures. Application of this methodology to 60Co and 133Ba radioactive sources is presented and showed results in good agreement with a conventional proportional counter 4πβ(PC)-γ coincidence system.
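The final fitting step described (a least-squares fit of the measured apparent disintegration rate versus the beta-efficiency parameter against a Monte Carlo computed response curve, with the source activity as the scale factor) can be sketched as below; the response curve and the data points are invented for illustration and are not from the cited measurements.

```python
# Illustration of the final step described: fit the measured apparent
# disintegration rate versus the beta-efficiency parameter to a Monte
# Carlo-computed response curve, with the activity as the free scale factor.
# The "Monte Carlo" curve and the data points below are invented.
import numpy as np
from scipy.optimize import curve_fit

def mc_response(x, shape=0.12):
    # Stand-in for the simulated relative response vs (1 - eff)/eff.
    return 1.0 + shape * x

def model(x, activity):
    return activity * mc_response(x)

x_data = np.array([0.05, 0.10, 0.20, 0.35, 0.50])          # (1 - eff)/eff
y_data = np.array([1010., 1013., 1027., 1043., 1061.])     # apparent rate (1/s)

activity_fit, cov = curve_fit(model, x_data, y_data, p0=[1000.0])
print("activity =", activity_fit[0], "+/-", np.sqrt(cov[0, 0]), "Bq")
```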
On ways to overcome the magical capacity limit of working memory.
Turi, Zsolt; Alekseichuk, Ivan; Paulus, Walter
2018-04-01
The ability to simultaneously process and maintain multiple pieces of information is limited. Over the past 50 years, observational methods have provided a large amount of insight regarding the neural mechanisms that underpin the mental capacity that we refer to as "working memory." More than 20 years ago, a neural coding scheme was proposed for working memory. As a result of technological developments, we can now not only observe but can also influence brain rhythms in humans. Building on these novel developments, we have begun to externally control brain oscillations in order to extend the limits of working memory.
Numerical simulations of a nonequilibrium argon plasma in a shock-tube experiment
NASA Technical Reports Server (NTRS)
Cambier, Jean-Luc
1991-01-01
A code developed for the numerical modeling of nonequilibrium radiative plasmas is applied to the simulation of the propagation of strong ionizing shock waves in argon gas. The simulations attempt to reproduce a series of shock-tube experiments which will be used to validate the numerical models and procedures. The ability to perform unsteady simulations makes it possible to observe some fluctuations in the shock propagation, coupled to the kinetic processes. A coupling mechanism by pressure waves, reminiscent of oscillation mechanisms observed in detonation waves, is described. The effect of upper atomic levels is also briefly discussed.
Continuous Codes and Standards Improvement (CCSI)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivkin, Carl H; Burgess, Robert M; Buttner, William J
2015-10-21
As of 2014, the majority of the codes and standards required to initially deploy hydrogen technologies infrastructure in the United States have been promulgated. These codes and standards will be field tested through their application to actual hydrogen technologies projects. Continuous codes and standards improvement (CCSI) is a process of identifying code issues that arise during project deployment and then developing code solutions to these issues. These solutions would typically be proposed amendments to codes and standards. The process is continuous because, as technology and the state of safety knowledge develop, there will be a need to monitor the application of codes and standards and improve them based on information gathered during their application. This paper will discuss code issues that have surfaced through hydrogen technologies infrastructure project deployment and potential code changes that would address these issues. The issues that this paper will address include (1) setback distances for bulk hydrogen storage, (2) code-mandated hazard analyses, (3) sensor placement and communication, (4) the use of approved equipment, and (5) system monitoring and maintenance requirements.
Optical image encryption based on real-valued coding and subtracting with the help of QR code
NASA Astrophysics Data System (ADS)
Deng, Xiaopeng
2015-08-01
A novel optical image encryption based on real-valued coding and subtracting is proposed with the help of quick response (QR) code. In the encryption process, the original image to be encoded is first transformed into the corresponding QR code, and then the corresponding QR code is encoded into two phase-only masks (POMs) by using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In the decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and actual smartphone collected results show that the method is feasible and has strong tolerance to noise, phase difference and ratio between intensities of the two decryption light beams.
Stevenson, C; Cutcliffe, J
2006-12-01
Special observation by mental health professionals is the recommended approach for those people deemed at risk or risky. Recent research and academic writing have challenged the benefits of observing people/patients who are defined as 'at risk', and a more human engagement process is being recommended. Despite this assault, practice has not changed substantively, suggesting a need for a thorough exploration and questioning of the practices and process. The paper outlines three Foucaultian approaches to historical analysis. It applies aspects of Foucault's archaeology/genealogy, discourse and power/knowledge to explore the practices of special observation as a means of controlling risk, especially suicide risk. We identify the regulatory function of the 'gaze', professional codes and government policy in relation to restricting professional practices. We argue that observation can be related to moral therapy, wherein the person relinquishes madness for responsibility through a disciplinary process and, in governing risk, a 'professional industry' is created. The regulation of statements about people with mental health issues is exposed and related to what can be said and done by professionals. Finally, we look at productive power in relation to observation, and how it is intimately related to resistance. We conclude with 'soft' recommendations for practice discursively produced through the writing of the paper.
Study of the Socratic method during cognitive restructuring.
Froján-Parga, María Xesús; Calero-Elvira, Ana; Montaño-Fidalgo, Montserrat
2011-01-01
Cognitive restructuring, in particular in the form of the Socratic method, is widely used by clinicians. However, little research has been published with respect to underlying processes, which has hindered well-accepted explanations of its effectiveness. The aim of this study is to present a new method of analysis of the Socratic method during cognitive restructuring based on the observation of the therapist's verbal behaviour. Using recordings from clinical sessions, 18 sequences were selected in which the Socratic method was applied by six cognitive-behavioural therapists working at a private clinical centre in Madrid. The recordings involved eight patients requiring therapy for various psychological problems. Observations were coded using a category system designed by the authors and that classifies the therapist's verbal behaviour into seven hypothesized functions based on basic behavioural operations. We used the Observer XT software to code the observed sequences. The results are summarized through a preliminary model which considers three different phases of the Socratic method and some functions of the therapist's verbal behaviour in each of these phases: discriminative and reinforcement functions in the starting phase, informative and motivational functions in the course of the debate, and instructional and reinforcement functions in the final phase. We discuss the long-term potential clinical benefits of the current proposal. Copyright © 2010 John Wiley & Sons, Ltd.
Towards a turbulent magnetic dynamo platform
NASA Astrophysics Data System (ADS)
Flippo, Kirk; Rasmus, Alexander; Li, Hui; Li, Shengtai; Kuranz, Carolyn; Levesque, Joseph; Klein, Sallee; Tzeferacos, Petros
2017-10-01
It is known through astronomical observations that most of the Universe is ionized, magnetized, and often turbulent and filled with jets. One theorized process to create strong magnetic fields and jets is the turbulent magnetic dynamo. The magnetic dynamo is a fundamental process in plasma physics, taking kinetic energy and converting it to magnetic energy, and is very important to planetary physics and astrophysics. We report on recent Omega EP experiments to produce a platform with a turbulent plume of magnetized material with which to study the turbulent magnetic dynamo process. The laser interaction with the target can seed magnetic fields that can be advected into the plume and amplified to saturation by the turbulent magnetic dynamo process. The experimentally measured plume characteristics are compared to hydro code calculations.
Design of neurophysiologically motivated structures of time-pulse coded neurons
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lobodzinska, Raisa F.
2009-04-01
A general methodology for a biologically motivated concept of building sensor processing systems with parallel input, picture-operand processing, and time-pulse coding is described in this paper. Advantages of such coding for the creation of parallel programmed 2D-array structures for next-generation digital computers, which require untraditional numerical systems for processing analog, digital, hybrid and neuro-fuzzy operands, are shown. Simulation results for the optoelectronic time-pulse coded intelligent neural elements (OETPCINE) and implementation results for a wide set of neuro-fuzzy logic operations are considered. The simulation results confirm the engineering advantages, intellectuality, and circuit flexibility of OETPCINE for the creation of advanced 2D-structures. The developed equivalentor-nonequivalentor neural element has a power consumption of 10 mW and a processing time of about 10 to 100 µs.
Plouff, Donald
2000-01-01
Gravity observations are directly made or are obtained from other sources by the U.S. Geological Survey in order to prepare maps of the anomalous gravity field and consequently to interpret the subsurface distribution of rock densities and associated lithologic or geologic units. Observations are made in the field with gravity meters at new locations and at reoccupations of previously established gravity "stations." This report illustrates an interactively-prompted series of steps needed to convert gravity "readings" to values that are tied to established gravity datums and includes computer programs to implement those steps. Inasmuch as individual gravity readings have small variations, gravity-meter (instrument) drift may not be smoothly variable, and accommodations may be needed for ties to previously established stations, the reduction process is iterative. Decision-making by the program user is prompted by lists of best values and graphical displays. Notes about irregularities of topography, which affect the value of observed gravity but are not shown in sufficient detail on topographic maps, must be recorded in the field. This report illustrates ways to record field notes (distances, heights, and slope angles) and includes computer programs to convert field notes to gravity terrain corrections. This report includes approaches that may serve as models for other applications, for example: portrayal of system flow; style of quality control to document and validate computer applications; lack of dependence on proprietary software except source code compilation; method of file-searching with a dwindling list; interactive prompting; computer code to write directly in the PostScript (Adobe Systems Incorporated) printer language; and highlighting the four-digit year on the first line of time-dependent data sets for assured Y2K compatibility. Computer source codes provided are written in the Fortran scientific language. In order for the programs to operate, they first must be converted (compiled) into an executable form on the user's computer. Although program testing was done in a UNIX (tradename of American Telephone and Telegraph Company) computer environment, it is anticipated that only a system-dependent date-and-time function may need to be changed for adaptation to other computer platforms that accept standard Fortran code.
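The report's reduction steps are implemented in Fortran; the short Python sketch below illustrates just one of them, removing linear gravity-meter drift between repeated base-station readings. The function name and the numbers are illustrative, not taken from the report, and real reductions also involve dial constants, Earth tides, and datum ties.

    # Minimal sketch of one step in a gravity-reduction workflow: removing
    # linear gravity-meter drift using a repeated reading at a base station.
    # Illustrative only; tides, dial constants and datum ties are omitted.
    def drift_corrected(readings, times, base_start=0, base_end=-1):
        """readings in mGal, times in decimal hours; base station read first and last."""
        drift_rate = (readings[base_end] - readings[base_start]) / \
                     (times[base_end] - times[base_start])
        t0 = times[base_start]
        return [r - drift_rate * (t - t0) for r, t in zip(readings, times)]

    corrected = drift_corrected([1234.56, 1240.12, 1238.90, 1234.70],
                                [8.0, 10.5, 13.0, 16.0])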
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.
2011-03-01
This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are needed for repository modeling are severely lacking. In addition, most of the existing reactive transport codes were developed for non-radioactive contaminants, and they need to be adapted to account for radionuclide decay and in-growth. The accessibility of the source codes is generally limited. Because the problems of interest for the Waste IPSC are likely to result in relatively large computational models, a compact memory-usage footprint and a fast/robust solution procedure will be needed. A robust massively parallel processing (MPP) capability will also be required to provide reasonable turnaround times on the analyses that will be performed with the code. A performance assessment (PA) calculation for a waste disposal system generally requires a large number (hundreds to thousands) of model simulations to quantify the effect of model parameter uncertainties on the predicted repository performance. A set of codes for a PA calculation must be sufficiently robust and fast in terms of code execution. A PA system as a whole must be able to provide multiple alternative models for a specific set of physical/chemical processes, so that the users can choose various levels of modeling complexity based on their modeling needs. This requires PA codes, preferably, to be highly modularized. Most of the existing codes have difficulties meeting these requirements.
Based on the gap analysis results, we have made the following recommendations for code selection and code development for the NEAMS Waste IPSC: (1) build fully coupled high-fidelity THCMBR codes using the existing SIERRA codes (e.g., ARIA and ADAGIO) and platform, (2) use DAKOTA to build an enhanced performance assessment system (EPAS), and (3) build a modular code architecture and key code modules for performance assessments. The key chemical calculation modules will be built by expanding the existing CANTERA capabilities as well as by extracting useful components from other existing codes.
NASA Astrophysics Data System (ADS)
Kleusberg, E.; Sarmast, S.; Schlatter, P.; Ivanell, S.; Henningson, D. S.
2016-09-01
The wake structure behind a wind turbine, generated by the spectral element code Nek5000, is compared with that from the finite volume code EllipSys3D. The wind turbine blades are modeled using the actuator line method. We conduct the comparison on two different setups. One is based on an idealized rotor approximation with constant circulation imposed along the blades, corresponding to Glauert's optimal operating condition, and the other is the Tjæreborg wind turbine. The focus lies on analyzing the differences in the wake structures entailed by the different codes and corresponding setups. The comparisons show good agreement for the defining parameters of the wake, such as the wake expansion, helix pitch and circulation of the helical vortices. Differences can be related to the lower numerical dissipation in Nek5000 and to the domain differences at the rotor center. At comparable resolution, Nek5000 yields more accurate results. It is observed that in the spectral element method the helical vortices, both at the tip and root of the actuator lines, retain their initial swirl velocity distribution for a longer distance in the near wake. This results in a lower vortex core growth and larger maximum vorticity along the wake. Additionally, it is observed that the breakdown process of the spiral tip vortices is significantly different between the two methods, with vortex merging occurring immediately after the onset of instability in the finite volume code, while the Nek5000 simulations exhibit a 2-3 radii period of vortex pairing before merging.
Dual Roles for Spike Signaling in Cortical Neural Populations
Ballard, Dana H.; Jehee, Janneke F. M.
2011-01-01
A prominent feature of signaling in cortical neurons is that of randomness in the action potential. The output of a typical pyramidal cell can be well fit with a Poisson model, and variations in the Poisson rate repeatedly have been shown to be correlated with stimuli. However while the rate provides a very useful characterization of neural spike data, it may not be the most fundamental description of the signaling code. Recent data showing γ frequency range multi-cell action potential correlations, together with spike timing dependent plasticity, are spurring a re-examination of the classical model, since precise timing codes imply that the generation of spikes is essentially deterministic. Could the observed Poisson randomness and timing determinism reflect two separate modes of communication, or do they somehow derive from a single process? We investigate in a timing-based model whether the apparent incompatibility between these probabilistic and deterministic observations may be resolved by examining how spikes could be used in the underlying neural circuits. The crucial component of this model draws on dual roles for spike signaling. In learning receptive fields from ensembles of inputs, spikes need to behave probabilistically, whereas for fast signaling of individual stimuli, the spikes need to behave deterministically. Our simulations show that this combination is possible if deterministic signals using γ latency coding are probabilistically routed through different members of a cortical cell population at different times. This model exhibits standard features characteristic of Poisson models such as orientation tuning and exponential interval histograms. In addition, it makes testable predictions that follow from the γ latency coding. PMID:21687798
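As a point of reference for the Poisson description above (not the authors' dual-role model), the following Python sketch generates homogeneous Poisson spiking with a Bernoulli approximation and checks that the inter-spike intervals have the expected exponential mean; the rate and duration are arbitrary choices for the illustration.

    # Homogeneous Poisson spike train via a Bernoulli approximation per time bin.
    import numpy as np

    rate_hz, duration_s, dt = 20.0, 100.0, 0.001
    rng = np.random.default_rng(1)
    spikes = rng.random(int(duration_s / dt)) < rate_hz * dt   # one Bernoulli draw per bin
    spike_times = np.nonzero(spikes)[0] * dt
    isis = np.diff(spike_times)                                # approximately exponential
    print(f"mean ISI = {isis.mean():.3f} s (expected {1.0 / rate_hz:.3f} s)")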
Processing Code-Switching in Algerian Bilinguals: Effects of Language Use and Semantic Expectancy
Kheder, Souad; Kaan, Edith
2016-01-01
Using a cross-modal naming paradigm this study investigated the effect of sentence constraint and language use on the expectancy of a language switch during listening comprehension. Sixty-five Algerian bilinguals who habitually code-switch between Algerian Arabic and French (AA-FR) but not between Standard Arabic and French (SA-FR) listened to sentence fragments and named a visually presented French target NP out loud. Participants’ speech onset times were recorded. The sentence context was either highly semantically constraining toward the French NP or not. The language of the sentence context was either in Algerian Arabic or in Standard Arabic, but the target NP was always in French, thus creating two code-switching contexts: a typical and recurrent code-switching context (AA-FR) and a non-typical code-switching context (SA-FR). Results revealed a semantic constraint effect indicating that the French switches were easier to process in the high compared to the low-constraint context. In addition, the effect size of semantic constraint was significant in the more typical code-switching context (AA-FR) suggesting that language use influences the processing of switching between languages. The effect of semantic constraint was also modulated by code-switching habits and the proficiency of L2 French. Semantic constraint was reduced in bilinguals who frequently code-switch and in bilinguals with high proficiency in French. Results are discussed with regards to the bilingual interactive activation model (Dijkstra and Van Heuven, 2002) and the control process model of code-switching (Green and Wei, 2014). PMID:26973559
Quick Response codes for surgical safety: a prospective pilot study.
Dixon, Jennifer L; Smythe, William Roy; Momsen, Lara S; Jupiter, Daniel; Papaconstantinou, Harry T
2013-09-01
Surgical safety programs have been shown to reduce patient harm; however, there is variable compliance. The purpose of this study is to determine if innovative technology such as Quick Response (QR) codes can facilitate surgical safety initiatives. We prospectively evaluated the use of QR codes during the surgical time-out for 40 operations. Feasibility and accuracy were assessed. Perceptions of the current time-out process and the QR code application were evaluated through surveys using a 5-point Likert scale and binomial yes or no questions. At baseline (n = 53), survey results from the surgical team agreed or strongly agreed that the current time-out process was efficient (64%), easy to use (77%), and provided clear information (89%). However, 65% of surgeons felt that process improvements were needed. Thirty-seven of 40 (92.5%) QR codes scanned successfully, of which 100% were accurate. Three scan failures resulted from excessive curvature or wrinkling of the QR code label on the body. Follow-up survey results (n = 33) showed that the surgical team agreed or strongly agreed that the QR program was clearer (70%), easier to use (57%), and more accurate (84%). Seventy-four percent preferred the QR system to the current time-out process. QR codes accurately transmit patient information during the time-out procedure and are preferred to the current process by surgical team members. The novel application of this technology may improve compliance, accuracy, and outcomes. Copyright © 2013 Elsevier Inc. All rights reserved.
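A hedged illustration of the kind of label generation involved: the Python snippet below encodes a time-out payload with the third-party qrcode package (installable as qrcode[pil]). The field names and values are hypothetical and are not those used in the study, which does not describe its payload format.

    # Illustrative only: encoding a surgical time-out payload as a QR code.
    import json
    import qrcode

    payload = json.dumps({
        "patient_id": "000000",          # hypothetical fields, not the study's format
        "procedure": "example procedure",
        "site": "left",
        "surgeon": "A. Surgeon",
    })
    img = qrcode.make(payload)           # build the QR symbol
    img.save("timeout_label.png")        # print and affix for scanning at time-out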
Verification of Software: The Textbook and Real Problems
NASA Technical Reports Server (NTRS)
Carlson, Jan-Renee
2006-01-01
The process of verification, or determining the order of accuracy of computational codes, can be problematic when working with large, legacy computational methods that have been used extensively in industry or government. Verification does not ensure that the computer program is producing a physically correct solution; it ensures merely that the observed order of accuracy of the solutions is the same as the theoretical order of accuracy. The Method of Manufactured Solutions (MMS) is one of several ways of determining the order of accuracy. MMS is used to verify a series of computer codes progressing in sophistication from "textbook" to "real life" applications. The degree of numerical precision in the computations considerably influenced the range of mesh density needed to achieve the theoretical order of accuracy, even for 1-D problems. The choice of manufactured solutions and mesh form shifted the observed order in specific areas but not in general. Solution residual (iterative) convergence was not always achieved for 2-D Euler manufactured solutions. L2-norm convergence differed from variable to variable; therefore, an observed order of accuracy could not be determined conclusively in all cases, the cause of which is currently under investigation.
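The standard observed-order-of-accuracy estimate underlying such MMS studies can be written in a few lines; the error values below are illustrative, not taken from the paper.

    # Observed order of accuracy from discretization errors on two meshes whose
    # spacing differs by a factor r: p_obs = log(e_coarse/e_fine) / log(r).
    import math

    def observed_order(e_coarse, e_fine, refinement_ratio=2.0):
        return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

    print(observed_order(1.6e-3, 4.1e-4))   # ~1.96, consistent with a 2nd-order scheme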
ERIC Educational Resources Information Center
Meyer, Linda A.; And Others
This manual describes the model--specifically the observation procedures and coding systems--used in a longitudinal study of how children learn to comprehend what they read, with particular emphasis on science texts. Included are procedures for the following: identifying students; observing--recording observations and diagraming the room; writing…
The SCEC/USGS dynamic earthquake rupture code verification exercise
Harris, R.A.; Barall, M.; Archuleta, R.; Dunham, E.; Aagaard, Brad T.; Ampuero, J.-P.; Bhat, H.; Cruz-Atienza, Victor M.; Dalguer, L.; Dawson, P.; Day, S.; Duan, B.; Ely, G.; Kaneko, Y.; Kase, Y.; Lapusta, N.; Liu, Yajing; Ma, S.; Oglesby, D.; Olsen, K.; Pitarka, A.; Song, S.; Templeton, E.
2009-01-01
Numerical simulations of earthquake rupture dynamics are now common, yet it has been difficult to test the validity of these simulations because there have been few field observations and no analytic solutions with which to compare the results. This paper describes the Southern California Earthquake Center/U.S. Geological Survey (SCEC/USGS) Dynamic Earthquake Rupture Code Verification Exercise, where codes that simulate spontaneous rupture dynamics in three dimensions are evaluated and the results produced by these codes are compared using Web-based tools. This is the first time that a broad and rigorous examination of numerous spontaneous rupture codes has been performed—a significant advance in this science. The automated process developed to attain this achievement provides for a future where testing of codes is easily accomplished. Scientists who use computer simulations to understand earthquakes utilize a range of techniques. Most of these assume that earthquakes are caused by slip at depth on faults in the Earth, but hereafter the strategies vary. Among the methods used in earthquake mechanics studies are kinematic approaches and dynamic approaches. The kinematic approach uses a computer code that prescribes the spatial and temporal evolution of slip on the causative fault (or faults). These types of simulations are very helpful, especially since they can be used in seismic data inversions to relate the ground motions recorded in the field to slip on the fault(s) at depth. However, these kinematic solutions generally provide no insight into the physics driving the fault slip or information about why the involved fault(s) slipped that much (or that little). In other words, these kinematic solutions may lack information about the physical dynamics of earthquake rupture that will be most helpful in forecasting future events. To help address this issue, some researchers use computer codes to numerically simulate earthquakes and construct dynamic, spontaneous rupture (hereafter called "spontaneous rupture") solutions. For these types of numerical simulations, rather than prescribing the slip function at each location on the fault(s), just the friction constitutive properties and initial stress conditions are prescribed. The subsequent stresses and fault slip spontaneously evolve over time as part of the elasto-dynamic solution. Therefore, spontaneous rupture computer simulations of earthquakes allow us to include everything that we know, or think that we know, about earthquake dynamics and to test these ideas against earthquake observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Côté, Benoit; Belczynski, Krzysztof; Fryer, Chris L.
The role of compact binary mergers as the main production site of r-process elements is investigated by combining stellar abundances of Eu observed in the Milky Way, galactic chemical evolution (GCE) simulations, binary population synthesis models, and gravitational wave measurements from Advanced LIGO. We compiled and reviewed seven recent GCE studies to extract the frequency of neutron star–neutron star (NS–NS) mergers that is needed in order to reproduce the observed [Eu/Fe] versus [Fe/H] relationship. We used our simple chemical evolution code to explore the impact of different analytical delay-time distribution functions for NS–NS mergers. We then combined our metallicity-dependent population synthesis models with our chemical evolution code to bring their predictions, for both NS–NS mergers and black hole–neutron star mergers, into a GCE context. Finally, we convolved our results with the cosmic star formation history to provide a direct comparison with current and upcoming Advanced LIGO measurements. When assuming that NS–NS mergers are the exclusive r-process sites, and that the ejected r-process mass per merger event is 0.01 M⊙, the number of NS–NS mergers needed in GCE studies is about 10 times larger than what is predicted by standard population synthesis models. Here, these two distinct fields can only be consistent with each other when assuming optimistic rates, massive NS–NS merger ejecta, and low Fe yields for massive stars. For now, population synthesis models and GCE simulations are in agreement with the current upper limit (O1) established by Advanced LIGO during their first run of observations. Upcoming measurements will provide an important constraint on the actual local NS–NS merger rate, will provide valuable insights on the plausibility of the GCE requirement, and will help to define whether or not compact binary mergers can be the dominant source of r-process elements in the universe.
Examining the relationship between comprehension and production processes in code-switched language
Guzzardo Tamargo, Rosa E.; Valdés Kroff, Jorge R.; Dussias, Paola E.
2016-01-01
We employ code-switching (the alternation of two languages in bilingual communication) to test the hypothesis, derived from experience-based models of processing (e.g., Boland, Tanenhaus, Carlson, & Garnsey, 1989; Gennari & MacDonald, 2009), that bilinguals are sensitive to the combinatorial distributional patterns derived from production and that they use this information to guide processing during the comprehension of code-switched sentences. An analysis of spontaneous bilingual speech confirmed the existence of production asymmetries involving two auxiliary + participle phrases in Spanish–English code-switches. A subsequent eye-tracking study with two groups of bilingual code-switchers examined the consequences of the differences in distributional patterns found in the corpus study for comprehension. Participants’ comprehension costs mirrored the production patterns found in the corpus study. Findings are discussed in terms of the constraints that may be responsible for the distributional patterns in code-switching production and are situated within recent proposals of the links between production and comprehension. PMID:28670049
Liu, Shuo; Cui, Tie Jun; Zhang, Lei; Xu, Quan; Wang, Qiu; Wan, Xiang; Gu, Jian Qiang; Tang, Wen Xuan; Qing Qi, Mei; Han, Jia Guang; Zhang, Wei Li; Zhou, Xiao Yang; Cheng, Qiang
2016-10-01
The concept of coding metasurface makes a link between physical metamaterial particles and digital codes, and hence it is possible to perform digital signal processing on the coding metasurface to realize unusual physical phenomena. Here, this study proposes to perform Fourier operations on coding metasurfaces and presents a principle called scattering-pattern shift, based on the convolution theorem, which allows steering of the scattering pattern to an arbitrarily predesigned direction. Owing to the constant reflection amplitude of the coding particles, the required coding pattern can be simply achieved by the modulus of two coding matrices. This study demonstrates that the scattering patterns directly calculated from the coding pattern using the Fourier transform are in excellent agreement with numerical simulations based on realistic coding structures, providing an efficient method for optimizing coding patterns to achieve predesigned scattering beams. The most important advantage of this approach over previous schemes for producing anomalous single-beam scattering is its flexible and continuous control of the beam toward arbitrary directions. This work opens a new route to study metamaterials from a fully digital perspective, predicting the possibility of combining conventional theorems of digital signal processing with the coding metasurface to realize more powerful manipulations of electromagnetic waves.
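Under simple scalar far-field assumptions, the scattering-pattern-shift idea reduces to the Fourier shift theorem: adding a linear (gradient) phase coding sequence modulo 2π displaces the Fourier-transformed pattern. The Python sketch below illustrates this with a uniform coding pattern; the array size and steering offset are arbitrary, and no realistic metasurface particles are modeled.

    # Scalar far-field sketch of scattering-pattern shift via the shift theorem.
    import numpy as np

    n = 64
    phi = np.zeros((n, n))                                  # uniform coding: broadside beam
    gradient = 2 * np.pi * 8 * np.arange(n)[None, :] / n    # linear phase ramp along x
    phi_shifted = np.mod(phi + gradient, 2 * np.pi)         # "addition" of coding patterns

    far = np.fft.fftshift(np.fft.fft2(np.exp(1j * phi)))
    far_shifted = np.fft.fftshift(np.fft.fft2(np.exp(1j * phi_shifted)))
    # The peak of |far_shifted| is displaced 8 spectral bins along x relative to |far|,
    # i.e. the scattering beam is steered by the added gradient coding sequence.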
Comparing Acquisition Strategies: Open Architecture versus Product Lines
2010-04-30
… software • New SOW language for accepting software deliveries – enables third-party reuse • Additional SOW language regarding conducting software code walkthroughs and for using integrated development environments … change the business environment must be the primary factor that drives the technical approach. Accordingly, there are business case decisions to be … elements of a system design should be made available to the customer to observe throughout the design process. Electronic access to the design environment …
NASA Astrophysics Data System (ADS)
Griffiths, Mike; Fedun, Viktor; Mumford, Stuart; Gent, Frederick
2013-06-01
The Sheffield Advanced Code (SAC) is a fully non-linear MHD code designed for simulations of linear and non-linear wave propagation in gravitationally strongly stratified magnetized plasma. It was developed primarily for the forward modelling of helioseismological processes and for the coupling processes in the solar interior, photosphere, and corona; it is built on the well-known VAC platform that allows robust simulation of the macroscopic processes in gravitationally stratified (non-)magnetized plasmas. The code has no limitations of simulation length in time imposed by complications originating from the upper boundary, nor does it require implementation of special procedures to treat the upper boundaries. SAC inherited its modular structure from VAC, thereby allowing modification to easily add new physics.
Using Automatic Code Generation in the Attitude Control Flight Software Engineering Process
NASA Technical Reports Server (NTRS)
McComas, David; O'Donnell, James R., Jr.; Andrews, Stephen F.
1999-01-01
This paper presents an overview of the attitude control subsystem flight software development process, identifies how the process has changed due to automatic code generation, analyzes each software development phase in detail, and concludes with a summary of our lessons learned.
ERIC Educational Resources Information Center
Moral, Cristian; de Antonio, Angelica; Ferre, Xavier; Lara, Graciela
2015-01-01
Introduction: In this article we propose a qualitative analysis tool--a coding system--that can support the formalisation of the information-seeking process in a specific field: research in computer science. Method: In order to elaborate the coding system, we have conducted a set of qualitative studies, more specifically a focus group and some…
NASA Astrophysics Data System (ADS)
Qiu, Kun; Zhang, Chongfu; Ling, Yun; Wang, Yibo
2007-11-01
This paper proposes an all-optical label processing scheme using multiple optical orthogonal codes sequences (MOOCS) for optical packet switching (OPS) (MOOCS-OPS) networks, for the first time to the best of our knowledge. In this scheme, the multiple optical orthogonal codes (MOOC) from multiple-group optical orthogonal codes (MGOOC) are permuted and combined to obtain the MOOCS for the optical labels, which are used to effectively enlarge the capacity of available optical codes for optical labels. The optical label processing (OLP) schemes are first reviewed and analyzed, and the principles of MOOCS-based optical labels for OPS networks are given; the MOOCS-OPS topology and the key realization units of the MOOCS-based optical label packets are then studied in detail. The performance of this novel all-optical label processing technology is analyzed and the corresponding simulations are performed. The analysis and results show that the proposed scheme can overcome the lack of available optical orthogonal code (OOC)-based optical labels caused by the limited number of single OOCs with short code lengths, and indicate that the MOOCS-OPS scheme is feasible.
Metzker, Manja; Dreisbach, Gesine
2011-06-01
Recently, it was proposed that the Simon effect would result not only from two interfering processes, as classical dual-route models assume, but from three processes. It was argued that priming from the spatial code to the nonspatial code might facilitate the identification of the nonspatial stimulus feature in congruent Simon trials. In the present study, the authors provide evidence that the identification of the nonspatial information can be facilitated by the activation of an associated spatial code. In three experiments, participants first associated centrally presented animal and fruit pictures with spatial responses. Subsequently, participants decided whether laterally presented letter strings were words (animal, fruit, or other words) or nonwords; stimulus position could be congruent or incongruent to the associated spatial code. As hypothesized, animal and fruit words were identified faster at congruent than at incongruent stimulus positions from the association phase. The authors conclude that the activation of the spatial code spreads to the nonspatial code, resulting in facilitated stimulus identification in congruent trials. These results speak to the assumption of a third process involved in the Simon task.
Zhou, Yingbiao; Zhu, Yueming; Dai, Longhai; Men, Yan; Wu, Jinhai; Zhang, Juankun; Sun, Yuanxia
2017-01-01
Melibiose is widely used as a functional carbohydrate. Whole-cell biocatalytic production of melibiose from raffinose could reduce its cost. However, characteristics of strains for whole-cell biocatalysis and mechanism of such process are unclear. We compared three different Saccharomyces cerevisiae strains (liquor, wine, and baker's yeasts) in terms of concentration variations of substrate (raffinose), target product (melibiose), and by-products (fructose and galactose) in whole-cell biocatalysis process. Distinct difference was observed in whole-cell catalytic efficiency among three strains. Furthermore, activities of key enzymes (invertase, α-galactosidase, and fructose transporter) involved in process and expression levels of their coding genes (suc2, mel1, and fsy1) were investigated. Conservation of key genes in S. cerevisiae strains was also evaluated. Results show that whole-cell catalytic efficiency of S. cerevisiae in the raffinose substrate was closely related to activity of key enzymes and expression of their coding genes. Finally, we summarized characteristics of producing strain that offered advantages, as well as contributions of key genes to excellent strains. Furthermore, we presented a dynamic mechanism model to achieve some mechanism insight for this whole-cell biocatalytic process. This pioneering study should contribute to improvement of whole-cell biocatalytic production of melibiose from raffinose.
Migration of tungsten dust in tokamaks: role of dust-wall collisions
NASA Astrophysics Data System (ADS)
Ratynskaia, S.; Vignitchouk, L.; Tolias, P.; Bykov, I.; Bergsåker, H.; Litnovsky, A.; den Harder, N.; Lazzaro, E.
2013-12-01
The modelling of a controlled tungsten dust injection experiment in TEXTOR by the dust dynamics code MIGRAINe is reported. The code, in addition to the standard dust-plasma interaction processes, also encompasses major mechanical aspects of dust-surface collisions. The use of analytical expressions for the restitution coefficients as functions of the dust radius and impact velocity allows us to account for the sticking and rebound phenomena that define which parts of the dust size distribution can migrate efficiently. The experiment provided unambiguous evidence of long-distance dust migration; artificially introduced tungsten dust particles were collected 120° toroidally away from the injection point, but also a selectivity in the permissible size of transported grains was observed. The main experimental results are reproduced by modelling.
Propagation of spiking regularity and double coherence resonance in feedforward networks.
Men, Cong; Wang, Jiang; Qin, Ying-Mei; Deng, Bin; Tsang, Kai-Ming; Chan, Wai-Lok
2012-03-01
We systematically investigate the propagation of spiking regularity in noisy feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. It is found that noise can modulate the transmission of firing rate and spiking regularity. Noise-induced synchronization and synfire-enhanced coherence resonance are also observed when signals propagate in noisy multilayer networks. It is interesting that double coherence resonance (DCR), with the combination of synaptic input correlation and noise intensity, is finally attained after layer-by-layer processing in FFNs. Furthermore, inhibitory connections also play essential roles in shaping DCR phenomena. Several properties of the neuronal network, such as noise intensity, correlation of synaptic inputs, and inhibitory connections, can serve as control parameters in modulating both rate coding and the order of temporal coding.
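For context, a minimal Euler-Maruyama integration of a single noise-driven FitzHugh-Nagumo neuron (the building block of these FFNs) is sketched below; the parameters, threshold, and noise level are illustrative and are not those of the study. The coefficient of variation of the inter-spike intervals is one common measure of spiking regularity.

    # Single noise-driven FitzHugh-Nagumo neuron, Euler-Maruyama integration.
    import numpy as np

    dt, steps = 0.01, 200_000
    a, b, eps, noise = 0.7, 0.8, 0.08, 0.3      # excitable regime, illustrative values
    rng = np.random.default_rng(3)
    v, w = -1.2, -0.6                           # start near the resting state
    spike_times = []
    for i in range(steps):
        v_old = v
        dv = v - v**3 / 3.0 - w                 # fast (voltage) variable
        dw = eps * (v + a - b * w)              # slow (recovery) variable
        v += dt * dv + noise * np.sqrt(dt) * rng.standard_normal()
        w += dt * dw
        if v_old < 1.0 <= v:                    # upward threshold crossing = spike
            spike_times.append(i * dt)

    isis = np.diff(spike_times)
    if isis.size > 1:
        cv = isis.std() / isis.mean()           # coefficient of variation: regularity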
Equality marker in the language of Bali
NASA Astrophysics Data System (ADS)
Wajdi, Majid; Subiyanto, Paulus
2018-01-01
The language of Bali can be grouped among the most elaborate languages of the world because, like the language of Java, it has speech levels: low and high. The low and high speech levels of the language of Bali are language codes that can be used to show and express social relationships between or among its speakers. This paper focuses on describing, analyzing, and interpreting the use of the low code of the language of Bali in daily communication in the speech community of Pegayaman, Bali. Observational and documentation methods were applied to provide the data for the research. Recording and field-note techniques were used to collect the data. The recorded spoken language and material from a Balinese novel were transcribed into written form to ease the process of analysis. Symmetric use of the low code expresses social equality between or among the participants involved in the communication. It also implies social intimacy between or among the speakers of the language of Bali. Regular and patterned use of the low code of the language of Bali is not merely a communication strategy; it is a kind of communication agreement or communication contract between the participants. By using the low code during their social and communication activities, the participants share and express their social equality and intimacy with the other participants involved in those activities.
Bergin, Michael
2011-01-01
Qualitative data analysis is a complex process and demands clear thinking on the part of the analyst. However, a number of deficiencies may obstruct the research analyst during the process, leading to inconsistencies occurring. This paper is a reflection on the use of a qualitative data analysis program, NVivo 8, and its usefulness in identifying consistency and inconsistency during the coding process. The author was conducting a large-scale study of providers and users of mental health services in Ireland. He used NVivo 8 to store, code and analyse the data and this paper reflects some of his observations during the study. The demands placed on the analyst in trying to balance the mechanics of working through a qualitative data analysis program, while simultaneously remaining conscious of the value of all sources are highlighted. NVivo 8 as a qualitative data analysis program is a challenging but valuable means for advancing the robustness of qualitative research. Pitfalls can be avoided during analysis by running queries as the analyst progresses from tree node to tree node rather than leaving it to a stage whereby data analysis is well advanced.
The Herschel Data Processing System - Hipe And Pipelines - During The Early Mission Phase
NASA Astrophysics Data System (ADS)
Ardila, David R.; Herschel Science Ground Segment Consortium
2010-01-01
The Herschel Space Observatory, the fourth cornerstone mission in the ESA science program, was launched on the 14th of May 2009. With a 3.5 m telescope, it is the largest space telescope ever launched. Herschel's three instruments (HIFI, PACS, and SPIRE) perform photometry and spectroscopy in the 55 - 672 micron range and will deliver exciting science for the astronomical community during at least three years of routine observations. Here we summarize the state of the Herschel Data Processing System and give an overview of future development milestones and plans. The development of the Herschel Data Processing System started seven years ago to support the data analysis for Instrument Level Tests. Resources were made available to implement a freely distributable Data Processing System capable of interactively and automatically reducing Herschel data at different processing levels. The system combines data retrieval, pipeline execution and scientific analysis in one single environment. The software is coded in Java and Jython to be platform independent and to avoid the need for commercial licenses. The Herschel Interactive Processing Environment (HIPE) is the user-friendly face of Herschel Data Processing. The first PACS preview observation of M51 was processed with HIPE, using basic pipeline scripts, into a fantastic image within 30 minutes of data reception. Also the first HIFI observations of DR-21 were successfully reduced to high quality spectra, followed by SPIRE observations of M66 and M74. The Herschel Data Processing System is a joint development by the Herschel Science Ground Segment Consortium, consisting of ESA, the NASA Herschel Science Center, and the HIFI, PACS and SPIRE consortium members.
NASA Astrophysics Data System (ADS)
Winteler, Christian
2014-02-01
In this dissertation we present the main features of a new nuclear reaction network evolution code. This new code allows nucleosynthesis calculations for large numbers of nuclides. The main results in this dissertation are all obtained using this new code. The strength of standard big bang nucleosynthesis is that all primordial abundances are determined by only one free parameter, the baryon-to-photon ratio η. We perform self-consistent nucleosynthesis calculations for the latest WMAP value η = (6.16±0.15)×10^-10. We predict primordial light element abundances: D/H = (2.84 ± 0.23)×10^-5, 3He/H = (1.07 ± 0.09)×10^-5, Yp = 0.2490±0.0005 and 7Li/H = (4.57 ± 0.55)×10^-10, in agreement with current observations and other predictions. We investigate the influence of the main production rate on the 6Li abundance, but find no significant increase of the predicted value, which is known to be orders of magnitude lower than the observed one. The r-process is responsible for the formation of about half of the elements heavier than iron in our solar system. This neutron capture process requires explosive environments with large neutron densities. The exact astrophysical site where the r-process occurs has not yet been identified. We explore jets from magnetorotational core collapse supernovae (MHD jets) as a possible r-process site. In a parametric study, assuming adiabatic expansion, we find good agreement with solar system abundances for a superposition of components with different electron fractions (Ye), ranging from Ye = 0.1 to Ye = 0.3. Fission is found to be important only for Ye ≤ 0.17. The first postprocessing calculations with data from 3D MHD core collapse supernova simulations are performed for two different simulations. Calculations are based on two different methods to extract data from the simulations: tracer particles and a two-dimensional, mass-weighted histogram. Both methods yield almost identical results. We find that both simulations can reproduce the global solar r-process abundance pattern. The ejected mass is found to be in agreement with galactic chemical evolution for a rare event rate of one MHD jet every hundredth to thousandth supernova.
NASA Astrophysics Data System (ADS)
Granato, Gian Luigi; Ragone-Figueroa, Cinthia; Domínguez-Tenreiro, Rosa; Obreja, Aura; Borgani, Stefano; De Lucia, Gabriella; Murante, Giuseppe
2015-06-01
We compute and study the infrared and sub-mm properties of high-redshift (z ≳ 1) simulated clusters and protoclusters. The results of a large set of hydrodynamical zoom-in simulations including active galactic nuclei (AGN) feedback have been treated with the recently developed radiative transfer code GRASIL-3D, which accounts for the effect of dust reprocessing in an arbitrary geometry. Here, we have slightly generalized the code to adapt it to the present purpose. We have then post-processed boxes of physical size 2 Mpc encompassing each of the 24 most massive clusters identified at z = 0, at several redshifts between 0.5 and 3, producing IR and sub-mm mock images of these regions and spectral energy distributions (SEDs) of the radiation coming out from them. While this field is in its infancy from the observational point of view, rapid development is expected in the near future thanks to observations performed in the far-IR and sub-mm bands. Notably, we find that in this spectral regime our predictions are little affected by the assumptions required by this post-processing, and the emission is mostly powered by star formation (SF) rather than accretion onto supermassive black holes (SMBHs). The comparison with the little observational information currently available highlights that the simulated cluster regions never attain the impressive star formation rates suggested by these observations. This problem becomes more intriguing taking into account that the brightest cluster galaxies (BCGs) in the same simulations turn out to be too massive. It seems that the interplay between the feedback schemes and the star formation model should be revised, possibly incorporating a positive feedback mode.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-28
... Process To Develop Consumer Data Privacy Code of Conduct Concerning Mobile Application Transparency AGENCY... convene the first meeting of a privacy multistakeholder process concerning mobile application transparency... concerning mobile application transparency. Stakeholders will engage in an open, transparent, consensus...
Automated encoding of clinical documents based on natural language processing.
Friedman, Carol; Shagina, Lyudmila; Lussier, Yves; Hripcsak, George
2004-01-01
The aim of this study was to develop a method based on natural language processing (NLP) that automatically maps an entire clinical document to codes with modifiers and to quantitatively evaluate the method. An existing NLP system, MedLEE, was adapted to automatically generate codes. The method involves matching of structured output generated by MedLEE consisting of findings and modifiers to obtain the most specific code. Recall and precision applied to Unified Medical Language System (UMLS) coding were evaluated in two separate studies. Recall was measured using a test set of 150 randomly selected sentences, which were processed using MedLEE. Results were compared with a reference standard determined manually by seven experts. Precision was measured using a second test set of 150 randomly selected sentences from which UMLS codes were automatically generated by the method and then validated by experts. Recall of the system for UMLS coding of all terms was .77 (95% CI.72-.81), and for coding terms that had corresponding UMLS codes recall was .83 (.79-.87). Recall of the system for extracting all terms was .84 (.81-.88). Recall of the experts ranged from .69 to .91 for extracting terms. The precision of the system was .89 (.87-.91), and precision of the experts ranged from .61 to .91. Extraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval.
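The recall and precision figures quoted above follow the usual definitions; as a reminder, a minimal computation from raw counts looks like this (the counts are illustrative, chosen only to reproduce values similar to those reported, and are not the study's data).

    # Recall and precision from raw true-positive / false-negative / false-positive counts.
    def recall(true_pos, false_neg):
        return true_pos / (true_pos + false_neg)

    def precision(true_pos, false_pos):
        return true_pos / (true_pos + false_pos)

    print(recall(83, 17), precision(89, 11))   # -> 0.83, 0.89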
OOSTethys - Open Source Software for the Global Earth Observing Systems of Systems
NASA Astrophysics Data System (ADS)
Bridger, E.; Bermudez, L. E.; Maskey, M.; Rueda, C.; Babin, B. L.; Blair, R.
2009-12-01
An open source software project is much more than just picking the right license, hosting modular code and providing effective documentation. Success in advancing in an open collaborative way requires that the process match the expected code functionality to the developer's personal expertise and organizational needs, as well as having an enthusiastic and responsive core lead group. We will present the lessons learned from OOSTethys, which is a community of software developers and marine scientists who develop open source tools, in multiple languages, to integrate ocean observing systems into an Integrated Ocean Observing System (IOOS). OOSTethys' goal is to dramatically reduce the time it takes to install, adopt and update standards-compliant web services. OOSTethys has developed servers, clients and a registry. Open source PERL, PYTHON, JAVA and ASP tool kits and reference implementations are helping the marine community publish near real-time observation data in interoperable standard formats. In some cases, publishing an Open Geospatial Consortium (OGC) Sensor Observation Service (SOS) from NetCDF files, a database, or even CSV text files could take only minutes, depending on the skills of the developer. OOSTethys is also developing an OGC standard registry, Catalog Service for the Web (CSW). This open source CSW registry was implemented to easily register and discover SOSs using ISO 19139 service metadata. A web interface layer over the CSW registry simplifies the registration process by harvesting metadata describing the observations and sensors from the “GetCapabilities” response of the SOS. OPENIOOS is the web client, developed in PERL, to visualize the sensors in the SOS services. While the number of OOSTethys software developers is small, currently about 10 around the world, the number of OOSTethys toolkit implementers is larger and growing, and the ease of use has played a large role in spreading the use of interoperable, standards-compliant web services widely in the marine community.
LDPC-PPM Coding Scheme for Optical Communication
NASA Technical Reports Server (NTRS)
Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael
2009-01-01
In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (a^) of the coded data, which is then processed by a decoder to obtain an estimate (u^) of the original data.
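The inner modulation step, mapping coded bits to M-ary PPM symbols, can be sketched as follows; the LDPC outer code and the Poisson channel model are not shown, and the block and alphabet sizes are illustrative.

    # Map coded bits to M-ary PPM frames: one pulsed slot out of M per symbol.
    import numpy as np

    def bits_to_ppm(bits, M=16):
        k = int(np.log2(M))                     # bits per PPM symbol
        assert len(bits) % k == 0
        symbols = bits.reshape(-1, k).dot(1 << np.arange(k)[::-1])   # binary -> slot index
        frames = np.zeros((len(symbols), M), dtype=int)
        frames[np.arange(len(symbols)), symbols] = 1                 # one pulse per frame
        return frames

    frames = bits_to_ppm(np.array([1, 0, 1, 1, 0, 0, 1, 0]), M=16)   # two 16-PPM symbols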
Deductive Glue Code Synthesis for Embedded Software Systems Based on Code Patterns
NASA Technical Reports Server (NTRS)
Liu, Jian; Fu, Jicheng; Zhang, Yansheng; Bastani, Farokh; Yen, I-Ling; Tai, Ann; Chau, Savio N.
2006-01-01
Automated code synthesis is a constructive process that can be used to generate programs from specifications. It can, thus, greatly reduce the software development cost and time. The use of formal code synthesis approach for software generation further increases the dependability of the system. Though code synthesis has many potential benefits, the synthesis techniques are still limited. Meanwhile, components are widely used in embedded system development. Applying code synthesis to component based software development (CBSD) process can greatly enhance the capability of code synthesis while reducing the component composition efforts. In this paper, we discuss the issues and techniques for applying deductive code synthesis techniques to CBSD. For deductive synthesis in CBSD, a rule base is the key for inferring appropriate component composition. We use the code patterns to guide the development of rules. Code patterns have been proposed to capture the typical usages of the components. Several general composition operations have been identified to facilitate systematic composition. We present the technique for rule development and automated generation of new patterns from existing code patterns. A case study of using this method in building a real-time control system is also presented.
The mathematical theory of signal processing and compression-designs
NASA Astrophysics Data System (ADS)
Feria, Erlan H.
2006-05-01
The mathematical theory of signal processing, named processor coding, will be shown to inherently arise as the computational time dual of Shannon's mathematical theory of communication, which is also known as source coding. Source coding is concerned with signal source memory space compression, while processor coding deals with signal processor computational time compression. Their combination is named compression-designs and is referred to as Conde for short. A compelling and pedagogically appealing diagram will be discussed, highlighting Conde's remarkably successful application to real-world knowledge-aided (KA) airborne moving target indicator (AMTI) radar.
Progress on China nuclear data processing code system
NASA Astrophysics Data System (ADS)
Liu, Ping; Wu, Xiaofei; Ge, Zhigang; Li, Songyang; Wu, Haicheng; Wen, Lili; Wang, Wenming; Zhang, Huanyu
2017-09-01
China is developing the nuclear data processing code Ruler, which can be used for producing multi-group cross sections and related quantities from evaluated nuclear data in the ENDF format [1]. Ruler includes modules for reconstructing cross sections over the full energy range, generating Doppler-broadened cross sections for a given temperature, producing effective self-shielded cross sections in the unresolved energy range, calculating scattering cross sections in the thermal energy range, generating group cross sections and matrices, and preparing WIMS-D format data files for the reactor physics code WIMS-D [2]. The programming language of Ruler is Fortran-90. Ruler has been tested on 32-bit computers with Windows-XP and Linux operating systems. The verification of Ruler has been performed by comparison with calculation results obtained with the NJOY99 [3] processing code. The validation of Ruler has been performed by using the WIMSD5B code.
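One representative operation of such a processing code is flux-weighted group averaging, sigma_g = ∫ sigma(E) phi(E) dE / ∫ phi(E) dE over each group. The Python sketch below uses a toy 1/v cross section and a 1/E weighting flux purely for illustration; it is not Ruler's implementation.

    # Toy flux-weighted multigroup cross-section collapse (trapezoidal quadrature).
    import numpy as np

    E = np.logspace(0, 6, 2001)                # energy grid, eV
    sigma = 10.0 / np.sqrt(E)                  # toy 1/v cross section, barns
    phi = 1.0 / E                              # toy 1/E weighting flux
    group_bounds = np.logspace(0, 6, 11)       # 10 equal-lethargy groups

    sigma_g = []
    for lo, hi in zip(group_bounds[:-1], group_bounds[1:]):
        m = (E >= lo) & (E <= hi)
        sigma_g.append(np.trapz(sigma[m] * phi[m], E[m]) / np.trapz(phi[m], E[m]))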
Schmithorst, Vincent J; Brown, Rhonda Douglas
2004-07-01
The suitability of a previously hypothesized triple-code model of numerical processing, involving analog magnitude, auditory verbal, and visual Arabic codes of representation, was investigated for the complex mathematical task of the mental addition and subtraction of fractions. Functional magnetic resonance imaging (fMRI) data from 15 normal adult subjects were processed using exploratory group Independent Component Analysis (ICA). Separate task-related components were found with activation in bilateral inferior parietal, left perisylvian, and ventral occipitotemporal areas. These results support the hypothesized triple-code model corresponding to the activated regions found in the individual components and indicate that the triple-code model may be a suitable framework for analyzing the neuropsychological bases of the performance of complex mathematical tasks. Copyright 2004 Elsevier Inc.
Secure ADS-B authentication system and method
NASA Technical Reports Server (NTRS)
Viggiano, Marc J (Inventor); Valovage, Edward M (Inventor); Samuelson, Kenneth B (Inventor); Hall, Dana L (Inventor)
2010-01-01
A secure system for authenticating the identity of ADS-B systems, including: an authenticator, including a unique id generator and a transmitter transmitting the unique id to one or more ADS-B transmitters; one or more ADS-B transmitters, including a receiver receiving the unique id, one or more secure processing stages merging the unique id with the ADS-B transmitter's identification, data and secret key and generating a secure code identification, and a transmitter transmitting a response containing the secure code and ADS-B transmitter's data to the authenticator; the authenticator including means for independently determining each ADS-B transmitter's secret key, a receiver receiving each ADS-B transmitter's response, one or more secure processing stages merging the unique id, ADS-B transmitter's identification and data and generating a secure code, and comparison processing comparing the authenticator-generated secure code and the ADS-B transmitter-generated secure code and providing an authentication signal based on the comparison result.
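The patent abstract does not name a specific cryptographic primitive; purely as an illustration of the challenge-response flow it describes, the sketch below uses HMAC-SHA-256, and the identifiers and payload strings are hypothetical.

    # Illustrative challenge-response flow in the spirit of the system above.
    import hmac, hashlib, os

    secret_key = os.urandom(32)                # shared per-transmitter secret (assumed)
    unique_id = os.urandom(16)                 # challenge issued by the authenticator

    def secure_code(key, challenge, transmitter_id, adsb_payload):
        msg = challenge + transmitter_id + adsb_payload
        return hmac.new(key, msg, hashlib.sha256).digest()

    # ADS-B transmitter side
    response = secure_code(secret_key, unique_id, b"ABC123", b"position+velocity")

    # Authenticator side: recompute independently and compare in constant time
    expected = secure_code(secret_key, unique_id, b"ABC123", b"position+velocity")
    authenticated = hmac.compare_digest(response, expected)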
Advanced Imaging Optics Utilizing Wavefront Coding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen
2015-06-01
Image processing offers the potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost-effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material-related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined the image quality of simulated and experimental wavefront-coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view, and bandwidth to develop a robust wavefront-coded system.
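The sketch below illustrates, under simple scalar-diffraction assumptions, how a cubic phase term on the pupil spreads the point-spread function; the grid size, aperture, and phase strength alpha are arbitrary placeholder choices, not the values used in this work.

```python
import numpy as np

n = 256
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 1.0).astype(float)       # circular aperture

alpha = 20.0                                      # cubic phase strength (assumed)
cubic_phase = np.exp(1j * alpha * (X**3 + Y**3))  # cubic phase plate on the pupil

def psf(pupil_field):
    """Incoherent PSF as the squared magnitude of the pupil's Fourier transform."""
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil_field)))
    return np.abs(field) ** 2

psf_standard = psf(pupil)
psf_coded = psf(pupil * cubic_phase)
print(psf_standard.max() / psf_coded.max())       # coded PSF is much flatter and broader
```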
Global GNSS processing based on the raw observation approach
NASA Astrophysics Data System (ADS)
Strasser, Sebastian; Zehentner, Norbert; Mayer-Gürr, Torsten
2017-04-01
Many global navigation satellite system (GNSS) applications, e.g. Precise Point Positioning (PPP), require high-quality GNSS products, such as precise GNSS satellite orbits and clocks. These products are routinely determined by analysis centers of the International GNSS Service (IGS). The current processing methods of the analysis centers make use of the ionosphere-free linear combination to reduce the ionospheric influence. Some of the analysis centers also form observation differences, in general double-differences, to eliminate several additional error sources. The raw observation approach is a new GNSS processing approach that was developed at Graz University of Technology for kinematic orbit determination of low Earth orbit (LEO) satellites and subsequently adapted to global GNSS processing in general. This new approach offers some benefits compared to well-established approaches, such as a straightforward incorporation of new observables due to the avoidance of observation differences and linear combinations. This becomes especially important in view of the changing GNSS landscape with two new systems, the European system Galileo and the Chinese system BeiDou, currently in deployment. GNSS products generated at Graz University of Technology using the raw observation approach currently comprise precise GNSS satellite orbits and clocks, station positions and clocks, code and phase biases, and Earth rotation parameters. To evaluate the new approach, products generated using the Global Positioning System (GPS) constellation and observations from the global IGS station network are compared to those of the IGS analysis centers. The comparisons show that the products generated at Graz University of Technology are on a similar level of quality to the products determined by the IGS analysis centers. This confirms that the raw observation approach is applicable to global GNSS processing. Some areas requiring further work have been identified, enabling future improvements of the method.
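For reference, the classical ionosphere-free combination that the raw observation approach avoids forming can be written in a few lines; the sketch below uses the standard GPS L1/L2 frequencies, and the pseudorange values are placeholders.

```python
# Ionosphere-free combination of dual-frequency GPS pseudoranges (metres).
F1, F2 = 1575.42e6, 1227.60e6          # GPS L1 and L2 carrier frequencies (Hz)

def iono_free(p1: float, p2: float) -> float:
    """P_IF = (f1^2 * P1 - f2^2 * P2) / (f1^2 - f2^2); removes the first-order ionospheric delay."""
    return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)

print(iono_free(22_000_000.0, 22_000_005.2))    # placeholder pseudorange values
```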
GENESIS: new self-consistent models of exoplanetary spectra
NASA Astrophysics Data System (ADS)
Gandhi, Siddharth; Madhusudhan, Nikku
2017-12-01
We are entering the era of high-precision and high-resolution spectroscopy of exoplanets. Such observations herald the need for robust self-consistent spectral models of exoplanetary atmospheres to investigate intricate atmospheric processes and to make observable predictions. Spectral models of plane-parallel exoplanetary atmospheres exist, mostly adapted from other astrophysical applications, with different levels of sophistication and accuracy. There is a growing need for a new generation of models custom-built for exoplanets and incorporating state-of-the-art numerical methods and opacities. The present work is a step in this direction. Here we introduce GENESIS, a plane-parallel, self-consistent, line-by-line exoplanetary atmospheric modelling code that includes (a) formal solution of radiative transfer using the Feautrier method, (b) radiative-convective equilibrium with temperature correction based on the Rybicki linearization scheme, (c) latest absorption cross-sections, and (d) internal flux and external irradiation, under the assumptions of hydrostatic equilibrium, local thermodynamic equilibrium and thermochemical equilibrium. We demonstrate the code here with cloud-free models of giant exoplanetary atmospheres over a range of equilibrium temperatures, metallicities, C/O ratios and spanning non-irradiated and irradiated planets, with and without thermal inversions. We provide the community with theoretical emergent spectra and pressure-temperature profiles over this range, along with those for several known hot Jupiters. The code can generate self-consistent spectra at high resolution and has the potential to be integrated into general circulation and non-equilibrium chemistry models as it is optimized for efficiency and convergence. GENESIS paves the way for high-fidelity remote sensing of exoplanetary atmospheres at high resolution with current and upcoming observations.
Memory for pictures and words as a function of level of processing: Depth or dual coding?
D'Agostino, P R; O'Neill, B J; Paivio, A
1977-03-01
The experiment was designed to test differential predictions derived from dual-coding and depth-of-processing hypotheses. Subjects under incidental memory instructions free recalled a list of 36 test events, each presented twice. Within the list, an equal number of events were assigned to structural, phonemic, and semantic processing conditions. Separate groups of subjects were tested with a list of pictures, concrete words, or abstract words. Results indicated that retention of concrete words increased as a direct function of the processing-task variable (structural < phonemic
Common Envelope Light Curves. I. Grid-code Module Calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galaviz, Pablo; Marco, Orsola De; Staff, Jan E.
The common envelope (CE) binary interaction occurs when a star transfers mass onto a companion that cannot fully accrete it. The interaction can lead to a merger of the two objects or to a close binary. The CE interaction is the gateway of all evolved compact binaries, all stellar mergers, and likely many of the stellar transients witnessed to date. CE simulations are needed to understand this interaction and to interpret stars and binaries thought to be the byproduct of this stage. At this time, simulations are unable to reproduce the few observational data available and several ideas have been put forward to address their shortcomings. The need for more definitive simulation validation is pressing and is already being fulfilled by observations from time-domain surveys. In this article, we present an initial method and its implementation for post-processing grid-based CE simulations to produce the light curve so as to compare simulations with upcoming observations. Here we implemented a zeroth order method to calculate the light emitted from CE hydrodynamic simulations carried out with the 3D hydrodynamic code Enzo used in unigrid mode. The code implements an approach for the computation of luminosity in both optically thick and optically thin regimes and is tested using the first 135 days of the CE simulation of Passy et al., where a 0.8 M⊙ red giant branch star interacts with a 0.6 M⊙ companion. This code is used to highlight two large obstacles that need to be overcome before realistic light curves can be calculated. We explain the nature of these problems and the attempted solutions and approximations in full detail to enable the next step to be identified and implemented. We also discuss our simulation in relation to recent data of transients identified as CE interactions.
NASA Astrophysics Data System (ADS)
Li, Xiaoyi; Gao, Hui; Soteriou, Marios C.
2017-08-01
Atomization of extremely high-viscosity liquids is of interest for many applications in the aerospace, automotive, pharmaceutical, and food industries. While detailed atomization measurements face grand challenges, high-fidelity numerical simulations offer the ability to explore the atomization details comprehensively. In this work, a previously validated high-fidelity first-principles simulation code, HiMIST, is used to simulate high-viscosity liquid jet atomization in crossflow. The code is used to perform a parametric study of the atomization process over a wide range of Ohnesorge numbers (Oh = 0.004-2) and Weber numbers (We = 10-160). Direct comparisons between the present study and previously published low-viscosity jet-in-crossflow results are performed. The effects of viscous damping and slowing on jet penetration, liquid surface instabilities, ligament formation/breakup, and subsequent droplet formation are investigated. Complex variations in near-field and far-field jet penetration with increasing Oh at different We are observed and linked with the underlying jet deformation and breakup physics. A transition in breakup regimes and an increase in droplet size with increasing Oh are observed, mostly consistent with literature reports. The detailed simulations elucidate a distinctive edge-ligament-breakup-dominated process with long-surviving ligaments for the higher-Oh cases, as opposed to a two-stage edge-stripping/column-fracture process for the lower-Oh counterparts. The trend of decreasing column deflection with increasing We is reversed as Oh increases. A predominantly unimodal droplet size distribution is predicted at higher Oh, in contrast to the bimodal distribution at lower Oh. It is found that neither the Rayleigh-Taylor nor the Kelvin-Helmholtz linear stability theory can be easily applied to interpret the distinct edge breakup process, and further study of the underlying physics is needed.
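The two governing parameters varied in this study follow directly from fluid properties; the sketch below uses the standard definitions, with placeholder property values for a viscous liquid jet in an air crossflow.

```python
import math

def ohnesorge(mu: float, rho: float, sigma: float, d: float) -> float:
    """Oh = mu / sqrt(rho * sigma * d): viscous forces vs inertial and surface-tension forces."""
    return mu / math.sqrt(rho * sigma * d)

def weber(rho_gas: float, u: float, d: float, sigma: float) -> float:
    """We = rho_g * u^2 * d / sigma: aerodynamic forces vs surface-tension forces."""
    return rho_gas * u**2 * d / sigma

# Placeholder values: a 1 mm viscous liquid jet in a 100 m/s air crossflow.
print(ohnesorge(mu=1.0, rho=1000.0, sigma=0.07, d=1e-3))
print(weber(rho_gas=1.2, u=100.0, d=1e-3, sigma=0.07))
```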
A review of predictive coding algorithms.
Spratling, M W
2017-03-01
Predictive coding is a leading theory of how the brain performs probabilistic inference. However, there are a number of distinct algorithms which are described by the term "predictive coding". This article provides a concise review of these different predictive coding algorithms, highlighting their similarities and differences. Five algorithms are covered: linear predictive coding which has a long and influential history in the signal processing literature; the first neuroscience-related application of predictive coding to explaining the function of the retina; and three versions of predictive coding that have been proposed to model cortical function. While all these algorithms aim to fit a generative model to sensory data, they differ in the type of generative model they employ, in the process used to optimise the fit between the model and sensory data, and in the way that they are related to neurobiology. Copyright © 2016 Elsevier Inc. All rights reserved.
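As a concrete reference for the first algorithm in the review, a minimal linear predictive coding fit via the autocorrelation (Yule-Walker) equations is sketched below; the synthetic signal and the model order are arbitrary choices for illustration.

```python
import numpy as np

def lpc(signal: np.ndarray, order: int) -> np.ndarray:
    """Solve the Yule-Walker equations R a = r for the linear prediction coefficients."""
    r = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

rng = np.random.default_rng(1)
t = np.arange(4000)
x = np.sin(0.1 * t) + 0.1 * rng.standard_normal(t.size)     # toy quasi-periodic signal

coeffs = lpc(x, order=4)
prediction = sum(c * np.roll(x, k + 1) for k, c in enumerate(coeffs))
print(coeffs, np.mean((x[4:] - prediction[4:]) ** 2))        # small residual error
```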
NASA Astrophysics Data System (ADS)
Kraljić, K.; Strüngmann, L.; Fimmel, E.; Gumbel, M.
2018-01-01
The genetic code is degenerate, and it is assumed that this redundancy provides error detection and correction mechanisms in the translation process. However, the biological meaning of the code's structure is still under current research. This paper presents the Genetic Code Analysis Toolkit (GCAT), which provides workflows and algorithms for the analysis of the structure of nucleotide sequences. In particular, sets or sequences of codons can be transformed and tested for circularity, comma-freeness, dichotomic partitions, and other properties. GCAT comes with a versatile editor custom-built to work with the genetic code and a batch mode for multi-sequence processing. With the ability to read FASTA files or load sequences from GenBank, the tool can be used for the mathematical and statistical analysis of existing sequence data. GCAT is Java-based and provides a plug-in concept for extensibility. Availability: open source. Homepage: http://www.gcat.bio/
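To illustrate the kind of test such a toolkit performs, the sketch below checks a small trinucleotide set for the comma-free property (no codon readable across the junction of any two concatenated codons); it is a generic Python illustration, not GCAT's own Java implementation.

```python
def is_comma_free(code: set[str]) -> bool:
    """A trinucleotide code is comma-free if no shifted reading of any
    concatenation xy (x, y in the code) yields another codon of the code."""
    for x in code:
        for y in code:
            pair = x + y
            if pair[1:4] in code or pair[2:5] in code:
                return False
    return True

print(is_comma_free({"AAC", "ACC"}))   # True for this toy pair
print(is_comma_free({"AAA", "AAT"}))   # False: "AAA" reappears in a shifted frame
```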
Energy coding in biological neural networks
Zhang, Zhikang
2007-01-01
Motivated by the experimental finding that signal transmission and neuronal energetic demands are tightly coupled to information coding in the cerebral cortex, we present a new theory that offers a unique mechanism for brain information processing. We demonstrate that the neural coding produced by the activity of the brain is well described by our theory of energy coding. Because the energy coding model reveals mechanisms of brain information processing based upon known biophysical properties, we can not only reproduce various experimental results of neuro-electrophysiology, but also quantitatively explain recent experimental results from neuroscientists at Yale University by means of the principle of energy coding. Because the theory of energy coding bridges the gap between functional connections within a biological neural network and energetic consumption, we estimate that the theory has very important consequences for quantitative research on cognitive function. PMID:19003513
Vector processing efficiency of plasma MHD codes by use of the FACOM 230-75 APU
NASA Astrophysics Data System (ADS)
Matsuura, T.; Tanaka, Y.; Naraoka, K.; Takizuka, T.; Tsunematsu, T.; Tokuda, S.; Azumi, M.; Kurita, G.; Takeda, T.
1982-06-01
In the framework of pipelined vector architecture, the efficiency of vector processing is assessed with respect to plasma MHD codes in nuclear fusion research. Using a vector processor, the FACOM 230-75 APU, the limit of the enhancement factor due to parallelism of current vector machines is examined for three numerical codes based on a fluid model. Reasonable speed-up factors of approximately 6, 6, and 4 times over the highly optimized scalar versions are obtained for ERATO (linear stability code), AEOLUS-R1 (nonlinear stability code), and APOLLO (1-1/2D transport code), respectively. Problems of pipelined vector processors are discussed from the viewpoint of restructuring, optimization, and choice of algorithms. In conclusion, the important concept of "concurrency within pipelined parallelism" is emphasized.
Orthographic Coding: Brain Activation for Letters, Symbols, and Digits.
Carreiras, Manuel; Quiñones, Ileana; Hernández-Cabrera, Juan Andrés; Duñabeitia, Jon Andoni
2015-12-01
The present experiment investigates the input coding mechanisms of 3 common printed characters: letters, numbers, and symbols. Despite research in this area, it is yet unclear whether the identity of these 3 elements is processed through the same or different brain pathways. In addition, some computational models propose that the position-in-string coding of these elements responds to general flexible mechanisms of the visual system that are not character-specific, whereas others suggest that the position coding of letters responds to specific processes that are different from those that guide the position-in-string assignment of other types of visual objects. Here, in an fMRI study, we manipulated character position and character identity through the transposition or substitution of 2 internal elements within strings of 4 elements. Participants were presented with 2 consecutive visual strings and asked to decide whether they were the same or different. The results showed: 1) that some brain areas responded more to letters than to numbers and vice versa, suggesting that processing may follow different brain pathways; 2) that the left parietal cortex is involved in letter identity, and critically in letter position coding, specifically contributing to the early stages of the reading process; and that 3) a stimulus-specific mechanism for letter position coding is operating during orthographic processing. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Liu, Teng; Zhang, Baocheng; Yuan, Yunbin; Li, Min
2018-01-01
Precise Point Positioning (PPP) is an absolute positioning technology mainly used in post data processing. With the continuously increasing demand for real-time high-precision applications in positioning, timing, retrieval of atmospheric parameters, etc., Real-Time PPP (RTPPP) and its applications have drawn more and more research attention in recent years. This study focuses on the models, algorithms and ionospheric applications of RTPPP on the basis of raw observations, in which high-precision slant ionospheric delays are estimated among others in real time. For this purpose, a robust processing strategy for multi-station RTPPP with raw observations has been proposed and realized, in which real-time data streams and State Space Representation (SSR) satellite orbit and clock corrections are used. With the RTPPP-derived slant ionospheric delays from a regional network, a real-time regional ionospheric Vertical Total Electron Content (VTEC) modeling method is proposed based on Adjusted Spherical Harmonic Functions and a Moving-Window Filter. SSR satellite orbit and clock corrections from different IGS analysis centers are evaluated. Ten globally distributed real-time stations are used to evaluate the positioning performance of the proposed RTPPP algorithms in both static and kinematic modes. RMS values of the positioning errors in static/kinematic mode are 5.2/15.5, 4.7/17.4 and 12.8/46.6 mm for the north, east and up components, respectively. Real-time slant ionospheric delays from RTPPP are compared with those from the traditional Carrier-to-Code Leveling (CCL) method, in terms of function model, formal precision and between-receiver differences over a short baseline. Results show that slant ionospheric delays from RTPPP are more precise and have a much better convergence performance than those from the CCL method in real-time processing. Thirty real-time stations from the Asia-Pacific Reference Frame network are used to model the ionospheric VTECs over Australia in real time, with slant ionospheric delays from both the RTPPP and CCL methods for comparison. The RMS of the VTEC differences between the RTPPP/CCL method and CODE final products is 0.91/1.09 TECU, and the RMS of the VTEC differences between the RTPPP and CCL methods is 0.67 TECU. Slant Total Electron Contents retrieved from the different VTEC models are also validated with epoch-differenced Geometry-Free combinations of dual-frequency phase observations, and mean RMS values are 2.14, 2.33 and 2.07 TECU for the RTPPP method, the CCL method and CODE final products, respectively. This shows the superiority of RTPPP-derived slant ionospheric delays in real-time ionospheric VTEC modeling.
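The CCL baseline referenced here starts from the geometry-free code combination; a minimal sketch of converting that combination to slant TEC is given below, using standard constants and placeholder pseudoranges. The phase leveling and code-bias handling of the actual method are omitted.

```python
# Slant TEC from the geometry-free (P2 - P1) code combination, first-order model.
F1, F2 = 1575.42e6, 1227.60e6      # GPS L1/L2 frequencies (Hz)
K = 40.3                           # ionospheric constant (m^3 s^-2 per electron/m^2)
TECU = 1.0e16                      # electrons per m^2 in one TEC unit

def slant_tec(p1: float, p2: float) -> float:
    """STEC [TECU] from dual-frequency pseudoranges; code biases are ignored here."""
    return (p2 - p1) * (F1**2 * F2**2) / (K * (F1**2 - F2**2)) / TECU

print(slant_tec(22_000_000.0, 22_000_003.5))   # placeholder values, roughly tens of TECU
```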
The MCNP6 Analytic Criticality Benchmark Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
2016-06-16
Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
An Efficient Method for Verifying Gyrokinetic Microstability Codes
NASA Astrophysics Data System (ADS)
Bravenec, R.; Candy, J.; Dorland, W.; Holland, C.
2009-11-01
Benchmarks for gyrokinetic microstability codes can be developed through successful "apples-to-apples" comparisons among them. Unlike previous efforts, we perform the comparisons for actual discharges, rendering the verification efforts relevant to existing experiments and future devices (ITER). The process requires i) assembling the experimental analyses at multiple times, radii, discharges, and devices, ii) creating the input files ensuring that the input parameters are faithfully translated code-to-code, iii) running the codes, and iv) comparing the results, all in an organized fashion. The purpose of this work is to automate this process as much as possible: At present, a python routine is used to generate and organize GYRO input files from TRANSP or ONETWO analyses. Another routine translates the GYRO input files into GS2 input files. (Translation software for other codes has not yet been written.) Other python codes submit the multiple GYRO and GS2 jobs, organize the results, and collect them into a table suitable for plotting. (These separate python routines could easily be consolidated.) An example of the process -- a linear comparison between GYRO and GS2 for a DIII-D discharge at multiple radii -- will be presented.
IAC-POP: FINDING THE STAR FORMATION HISTORY OF RESOLVED GALAXIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aparicio, Antonio; Hidalgo, Sebastian L.
2009-08-15
IAC-pop is a code designed to solve the star formation history (SFH) of a complex stellar population system, such as a galaxy, from the analysis of the color-magnitude diagram (CMD). It uses a genetic algorithm to minimize a χ² merit function comparing the star distributions in the observed CMD and the CMD of a synthetic stellar population. A parameterization of the CMDs is used, which is the main input of the code. In fact, the code can be applied to any problem in which a similar parameterization of an experimental set of data and models can be made. The method's internal consistency and robustness against several error sources, including observational effects, data sampling, and stellar evolution library differences, are tested. It is found that the best stability of the solution and the best way to estimate errors are obtained by several runs of IAC-pop while varying the input data parameterization. The routine MinnIAC is used to control this process. IAC-pop is offered for free use and can be downloaded from the site http://iac-star.iac.es/iac-pop. The routine MinnIAC is also offered on request, but support cannot be provided for its use. The only requirement for the use of IAC-pop and MinnIAC is referencing this paper and crediting as indicated on the site.
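A minimal sketch of the kind of χ² merit function minimized by the genetic algorithm, comparing binned star counts in the observed and synthetic CMDs; the Poisson-style weighting and the toy bin counts are assumptions, not necessarily the exact statistic used by IAC-pop.

```python
import numpy as np

def cmd_chi2(observed_counts: np.ndarray, synthetic_counts: np.ndarray) -> float:
    """Chi-square comparing star counts per CMD box (parameterized color-magnitude bins)."""
    model = np.clip(synthetic_counts, 1e-9, None)     # avoid division by zero in empty boxes
    return float(np.sum((observed_counts - model) ** 2 / model))

obs = np.array([[12, 30, 4], [7, 55, 9]])             # toy 2x3 grid of CMD boxes
syn = np.array([[10.0, 33.0, 5.0], [8.0, 50.0, 7.0]]) # synthetic population counts
print(cmd_chi2(obs, syn))
```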
Professional Ethics in Teaching: Towards the Development of a Code of Practice.
ERIC Educational Resources Information Center
Campbell, Elizabeth
2000-01-01
Provides a theoretical discussion about the process of creating a professional code of ethics for educators. Discusses six key issues and questions, introducing the development of a code of professional ethics and the complexities the code should address. Includes references. (CMK)
The Development of the World Anti-Doping Code.
Young, Richard
2017-01-01
This chapter addresses both the development and substance of the World Anti-Doping Code, which came into effect in 2003, as well as the subsequent Code amendments, which came into effect in 2009 and 2015. Through an extensive process of stakeholder input and collaboration, the World Anti-Doping Code has transformed the hodgepodge of inconsistent and competing pre-2003 anti-doping rules into a harmonized and effective approach to anti-doping. The Code, as amended, is now widely recognized worldwide as the gold standard in anti-doping. The World Anti-Doping Code originally went into effect on January 1, 2004. The first amendments to the Code went into effect on January 1, 2009, and the second amendments on January 1, 2015. The Code and the related international standards are the product of a long and collaborative process designed to make the fight against doping more effective through the adoption and implementation of worldwide harmonized rules and best practices. © 2017 S. Karger AG, Basel.
NASA Astrophysics Data System (ADS)
Hsueh, Yu-Li; Rogge, Matthew S.; Shaw, Wei-Tao; Kim, Jaedon; Yamamoto, Shu; Kazovsky, Leonid G.
2005-09-01
A simple and cost-effective upgrade of existing passive optical networks (PONs) is proposed, which realizes service overlay by novel spectral-shaping line codes. A hierarchical coding procedure allows processing simplicity and achieves desired long-term spectral properties. Different code rates are supported, and the spectral shape can be properly tailored to adapt to different systems. The computation can be simplified by quantization of trigonometric functions. DC balance is achieved by passing the dc residual between processing windows. The proposed line codes tend to introduce bit transitions to avoid long consecutive identical bits and facilitate receiver clock recovery. Experiments demonstrate and compare several different optimized line codes. For a specific tolerable interference level, the optimal line code can easily be determined, which maximizes the data throughput. The service overlay using the line-coding technique leaves existing services and field-deployed fibers untouched but fully functional, providing a very flexible and economic way to upgrade existing PONs.
Cerebral Laterality and Verbal Processes
ERIC Educational Resources Information Center
Sherman, Jay L.; And Others
1976-01-01
Research suggests that we process information by way of two distinct and functionally separate coding systems. Their location, somewhat dependent on cerebral laterality, varies in right- and left-handed persons. Tests this dual coding model. (Editor/RK)
Antúnez, Lucía; Giménez, Ana; Maiche, Alejandro; Ares, Gastón
2015-01-01
To study the influence of 2 interpretational aids of front-of-package (FOP) nutrition labels (color code and text descriptors) on attentional capture and consumers' understanding of nutritional information. A full factorial design was used to assess the influence of color code and text descriptors using visual search and eye tracking. Ten trained assessors participated in the visual search study and 54 consumers completed the eye-tracking study. In the visual search study, assessors were asked to indicate whether there was a label high in fat within sets of mayonnaise labels with different FOP labels. In the eye-tracking study, assessors answered a set of questions about the nutritional content of labels. The researchers used logistic regression to evaluate the influence of interpretational aids of FOP nutrition labels on the percentage of correct answers. Analyses of variance were used to evaluate the influence of the studied variables on attentional measures and participants' response times. Response times were significantly higher for monochromatic FOP labels compared with color-coded ones (3,225 vs 964 ms; P < .001), which suggests that color codes increase attentional capture. The highest number and duration of fixations and visits were recorded on labels that did not include color codes or text descriptors (P < .05). The lowest percentage of incorrect answers was observed when the nutrient level was indicated using color code and text descriptors (P < .05). The combination of color codes and text descriptors seems to be the most effective alternative to increase attentional capture and understanding of nutritional information. Copyright © 2015 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Foster, Tanina S.
2014-01-01
Introduction: Observational research using the thin slice technique has been routinely incorporated in observational research methods; however, there is limited evidence supporting the use of this technique compared to full interaction coding. The purpose of this study was to determine if this technique could be reliably coded, if ratings are…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Pengcheng; Mcclure, Mark; Shiozawa, Sogo
A series of experiments performed at the Fenton Hill hot dry rock site after stage 2 drilling of the Phase I reservoir provided intriguing field observations on the reservoir's responses to injection and venting under various conditions. Two teams participating in the US DOE Geothermal Technologies Office (GTO)'s Code Comparison Study (CCS) used different numerical codes to model these five experiments with the objective of inferring the hydraulic stimulation mechanism involved. The codes used by the two teams are based on different numerical principles, and the assumptions made were also different, due to intrinsic limitations in the codes and the modelers' personal interpretations of the field observations. Both sets of models were able to reproduce the most important field observations, and both found that it was the combination of the vertical gradient of the fracture opening pressure, injection volume, and the use/absence of proppant that yielded the different outcomes of the five experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacFarlane, R. E.
An accurate representation of the scattering of neutrons by the materials used to build cold sources at neutron scattering facilities is important for the initial design and optimization of a cold source, and for the analysis of experimental results obtained using the cold source. In practice, this requires a good representation of the physics of scattering from the material, a method to convert this into observable quantities (such as scattering cross sections), and a method to use the results in a neutron transport code (such as the MCNP Monte Carlo code). At Los Alamos, the authors have been developing these capabilities over the last ten years. The final set of cold-moderator evaluations, together with evaluations for conventional moderator materials, was released in 1994. These materials have been processed into MCNP data files using the NJOY Nuclear Data Processing System. Over the course of this work, they were able to develop a new module for NJOY called LEAPR based on the LEAP + ADDELT code from the UK as modified by D.J. Picton for cold-moderator calculations. Much of the physics for methane came from Picton's work. The liquid hydrogen work was originally based on a code using the Young-Koppel approach that went through a number of hands in Europe (including Rolf Neef and Guy Robert). It was generalized and extended for LEAPR, and depends strongly on work by Keinert and Sax of the University of Stuttgart. Thus, their collection of cold-moderator scattering kernels is truly an international effort, and they are glad to be able to return the enhanced evaluations and processing techniques to the international community. In this paper, they give sections on the major cold moderator materials (namely, solid methane, liquid methane, and liquid hydrogen) using each section to introduce the relevant physics for that material and to show typical results.
Metastable neural dynamics mediates expectation
NASA Astrophysics Data System (ADS)
Mazzucato, Luca; La Camera, Giancarlo; Fontanini, Alfredo
Sensory stimuli are processed faster when their presentation is expected compared to when they come as a surprise. We previously showed that, in multiple single-unit recordings from alert rat gustatory cortex, taste stimuli can be decoded faster from neural activity if preceded by a stimulus-predicting cue. However, the specific computational process mediating this anticipatory neural activity is unknown. Here, we propose a biologically plausible model based on a recurrent network of spiking neurons with clustered architecture. In the absence of stimulation, the model neural activity unfolds through sequences of metastable states, each state being a population vector of firing rates. We modeled taste stimuli and cue (the same for all stimuli) as two inputs targeting subsets of excitatory neurons. As observed in experiment, stimuli evoked specific state sequences, characterized in terms of `coding states', i.e., states occurring significantly more often for a particular stimulus. When stimulus presentation is preceded by a cue, coding states show a faster and more reliable onset, and expected stimuli can be decoded more quickly than unexpected ones. This anticipatory effect is unrelated to changes of firing rates in stimulus-selective neurons and is absent in homogeneous balanced networks, suggesting that a clustered organization is necessary to mediate the expectation of relevant events. Our results demonstrate a novel mechanism for speeding up sensory coding in cortical circuits. NIDCD K25-DC013557 (LM); NIDCD R01-DC010389 (AF); NSF IIS-1161852 (GL).
Deep Learning for Automated Extraction of Primary Sites from Cancer Pathology Reports
Qiu, John; Yoon, Hong-Jun; Fearn, Paul A.; ...
2017-05-03
Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study we investigated deep learning, specifically a convolutional neural network (CNN), for extracting ICD-O-3 topography codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, but trends were contingent on the CNN method and cancer site. Finally, these encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.
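For orientation, the "conventional term frequency vector approach" used as the baseline can be set up in a few lines with scikit-learn; the toy report snippets and topography labels below are invented placeholders, and the CNN itself is not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for de-identified pathology report text and ICD-O-3 topography labels.
reports = ["infiltrating ductal carcinoma of the left breast",
           "adenocarcinoma involving the right upper lobe of lung",
           "lobular carcinoma in situ, breast tissue",
           "squamous cell carcinoma of the lung, biopsy"]
labels = ["C50.9", "C34.1", "C50.9", "C34.9"]

baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))
baseline.fit(reports, labels)
print(baseline.predict(["carcinoma identified in lung biopsy specimen"]))
```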
Super-resolution processing for multi-functional LPI waveforms
NASA Astrophysics Data System (ADS)
Li, Zhengzheng; Zhang, Yan; Wang, Shang; Cai, Jingxiao
2014-05-01
Super-resolution (SR) is a radar processing technique closely related to pulse compression (the correlation receiver). Many super-resolution algorithms have been developed for improved range resolution and reduced sidelobe contamination. Traditionally, the waveforms used for SR have been either phase-coded (such as the LKP3 code or the Barker code) or frequency modulated (chirp or nonlinear frequency modulation). There is, however, an important class of waveforms which are either random in nature (such as random noise waveforms) or randomly modulated for multiple-function operations (such as the ADS-B radar signals in [1]). These waveforms have the advantage of low probability of intercept (LPI). If the existing SR techniques can be applied to these waveforms, there will be much more flexibility for using them in actual sensing missions. Also, SR usually has the great advantage that the final output (as an estimate of ground truth) is largely independent of the waveform. Such benefits are attractive to many important primary radar applications. In this paper a general introduction to SR algorithms is provided first, and some implementation considerations are discussed. The selected algorithms are applied to typical LPI waveforms, and the results are discussed. It is observed that SR algorithms can be reliably used for LPI waveforms; on the other hand, practical considerations should be kept in mind in order to obtain optimal estimation results.
Automated Translation of Safety Critical Application Software Specifications into PLC Ladder Logic
NASA Technical Reports Server (NTRS)
Leucht, Kurt W.; Semmel, Glenn S.
2008-01-01
The numerous benefits of automatic application code generation are widely accepted within the software engineering community. A few of these benefits include raising the abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at the NASA Kennedy Space Center (KSC) recognized the need for PLC code generation while developing their new ground checkout and launch processing system. They developed a process and a prototype software tool that automatically translates a high-level representation or specification of safety critical application software into ladder logic that executes on a PLC. This process and tool are expected to increase the reliability of the PLC code over that which is written manually, and may even lower life-cycle costs and shorten the development schedule of the new control system at KSC. This paper examines the problem domain and discusses the process and software tool that were prototyped by the KSC software engineers.
a Virtual Trip to the Schwarzschild-De Sitter Black Hole
NASA Astrophysics Data System (ADS)
Bakala, Pavel; Hledík, Stanislav; Stuchlík, Zdenĕk; Truparová, Kamila; Čermák, Petr
2008-09-01
We developed a realistic, fully general relativistic computer code for simulation of optical projection in a strong, spherically symmetric gravitational field. The standard theoretical analysis of optical projection for an observer in the vicinity of a Schwarzschild black hole is extended to black hole spacetimes with a repulsive cosmological constant, i.e., Schwarzschild-de Sitter (SdS) spacetimes. The influence of the cosmological constant is investigated for static observers and for observers radially free-falling from the static radius. The simulation includes effects of gravitational lensing, multiple images, Doppler and gravitational frequency shift, as well as the amplification of intensity. The code generates images of the static observer's sky and movie simulations for radially free-falling observers. Techniques of parallel programming are applied to achieve high performance and fast runs of the simulation code.
Bijective transformation circular codes and nucleotide exchanging RNA transcription.
Michel, Christian J; Seligmann, Hervé
2014-04-01
The C(3) self-complementary circular code X identified in genes of prokaryotes and eukaryotes is a set of 20 trinucleotides enabling reading frame retrieval and maintenance, i.e. a framing code (Arquès and Michel, 1996; Michel, 2012, 2013). Some mitochondrial RNAs correspond to DNA sequences when RNA transcription systematically exchanges between nucleotides (Seligmann, 2013a,b). We study here the 23 bijective transformation codes ΠX of X which may code nucleotide exchanging RNA transcription as suggested by this mitochondrial observation. The 23 bijective transformation codes ΠX are C(3) trinucleotide circular codes, seven of them are also self-complementary. Furthermore, several correlations are observed between the Reading Frame Retrieval (RFR) probability of bijective transformation codes ΠX and the different biological properties of ΠX related to their numbers of RNAs in GenBank's EST database, their polymerization rate, their number of amino acids and the chirality of amino acids they code. Results suggest that the circular code X with the functions of reading frame retrieval and maintenance in regular RNA transcription, may also have, through its bijective transformation codes ΠX, the same functions in nucleotide exchanging RNA transcription. Associations with properties such as amino acid chirality suggest that the RFR of X and its bijective transformations molded the origins of the genetic code's machinery. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Anguera, M. Teresa; Portell, Mariona; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana
2018-01-01
Indirect observation is a recent concept in systematic observation. It largely involves analyzing textual material generated either indirectly from transcriptions of audio recordings of verbal behavior in natural settings (e.g., conversation, group discussions) or directly from narratives (e.g., letters of complaint, tweets, forum posts). It may also feature seemingly unobtrusive objects that can provide relevant insights into daily routines. All these materials constitute an extremely rich source of information for studying everyday life, and they are continuously growing with the burgeoning of new technologies for data recording, dissemination, and storage. Narratives are an excellent vehicle for studying everyday life, and quantitization is proposed as a means of integrating qualitative and quantitative elements. However, this analysis requires a structured system that enables researchers to analyze varying forms and sources of information objectively. In this paper, we present a methodological framework detailing the steps and decisions required to quantitatively analyze a set of data that was originally qualitative. We provide guidelines on study dimensions, text segmentation criteria, ad hoc observation instruments, data quality controls, and coding and preparation of text for quantitative analysis. The quality control stage is essential to ensure that the code matrices generated from the qualitative data are reliable. We provide examples of how an indirect observation study can produce data for quantitative analysis and also describe the different software tools available for the various stages of the process. The proposed method is framed within a specific mixed methods approach that involves collecting qualitative data and subsequently transforming these into matrices of codes (not frequencies) for quantitative analysis to detect underlying structures and behavioral patterns. The data collection and quality control procedures fully meet the requirement of flexibility and provide new perspectives on data integration in the study of biopsychosocial aspects in everyday contexts. PMID:29441028
Human coding RNA editing is generally nonadaptive
Xu, Guixia; Zhang, Jianzhi
2014-01-01
Impairment of RNA editing at a handful of coding sites causes severe disorders, prompting the view that coding RNA editing is highly advantageous. Recent genomic studies have expanded the list of human coding RNA editing sites by more than 100 times, raising the question of how common advantageous RNA editing is. Analyzing 1,783 human coding A-to-G editing sites, we show that both the frequency and level of RNA editing decrease as the importance of a site or gene increases; that during evolution, edited As are more likely than unedited As to be replaced with Gs but not with Ts or Cs; and that among nonsynonymously edited As, those that are evolutionarily least conserved exhibit the highest editing levels. These and other observations reveal the overall nonadaptive nature of coding RNA editing, despite the presence of a few sites in which editing is clearly beneficial. We propose that most observed coding RNA editing results from tolerable promiscuous targeting by RNA editing enzymes, the original physiological functions of which remain elusive. PMID:24567376
Coastal Processes: Challenges for Monitoring and Prediction
2009-01-01
...Research Global and the Fondazione Cassa di Risparmio di La Spezia for the financial support provided for the conference and the special issue.
Binary video codec for data reduction in wireless visual sensor networks
NASA Astrophysics Data System (ADS)
Khursheed, Khursheed; Ahmad, Naeem; Imran, Muhammad; O'Nils, Mattias
2013-02-01
Wireless Visual Sensor Networks (WVSNs) are formed by deploying many Visual Sensor Nodes (VSNs) in the field. Typical applications of WVSNs include environmental monitoring, health care, industrial process monitoring, stadium/airport monitoring for security reasons, and many more. The energy budget in outdoor applications of WVSNs is limited to batteries, and frequent replacement of batteries is usually not desirable. So the processing as well as the communication energy consumption of the VSN needs to be optimized in such a way that the network remains functional for a long duration. The images captured by a VSN contain a huge amount of data and require efficient computational resources for processing and wide communication bandwidth for transmitting the results. Image processing algorithms must be designed and developed in such a way that they are computationally less complex and provide a high compression rate. For some applications of WVSNs, the captured images can be segmented into bi-level images, and hence bi-level image coding methods will efficiently reduce the information amount in these segmented images. But the compression rate of bi-level image coding methods is limited by the underlying compression algorithm. Hence there is a need for other intelligent and efficient algorithms which are computationally less complex and provide a better compression rate than bi-level image coding methods. Change coding is one such algorithm: it is computationally less complex (requiring only exclusive-OR operations) and provides better compression efficiency than image coding, but it is effective only for applications with slight changes between adjacent frames of the video. The detection and coding of Regions of Interest (ROIs) in the change frame efficiently reduce the information amount in the change frame. But if the number of objects in the change frames rises above a certain level, then the compression efficiency of both change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Binary Video Codec (BVC) for data reduction in WVSNs. We propose to implement all three compression techniques, i.e., image coding, change coding, and ROI coding, at the VSN and then select the smallest bit stream among the results of the three techniques. In this way the compression performance of the BVC never becomes worse than that of image coding. We conclude that the compression efficiency of BVC is always better than that of change coding and is always better than or equal to that of ROI coding and image coding.
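A minimal sketch of the selection logic described here: encode the current binary frame directly, as an XOR change frame against the previous frame, and, as a stand-in for ROI coding, as the bounding box of the changed region, then keep the shortest result. The run-length encoder and the bounding-box ROI are simplifications for illustration, not the exact BVC algorithms.

```python
import numpy as np

def run_length_encode(bits: np.ndarray) -> list[int]:
    """Simple run-length code of a flattened binary image (stand-in for a bi-level coder)."""
    flat = bits.ravel()
    change_points = np.flatnonzero(np.diff(flat)) + 1
    return np.diff(np.r_[0, change_points, flat.size]).tolist()

def encode_frame(curr: np.ndarray, prev: np.ndarray) -> tuple[str, list[int]]:
    change = np.bitwise_xor(curr, prev)                 # change coding via XOR
    ys, xs = np.nonzero(change)
    if ys.size:                                         # ROI coding: only the changed box
        roi = change[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    else:
        roi = np.zeros((1, 1), dtype=curr.dtype)
    candidates = {"image": run_length_encode(curr),
                  "change": run_length_encode(change),
                  "roi": run_length_encode(roi)}
    # Fewest runs used as a proxy for the smallest bit stream.
    best = min(candidates, key=lambda k: len(candidates[k]))
    return best, candidates[best]

prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy(); curr[2:4, 2:5] = 1                  # a small moving object
print(encode_frame(curr, prev)[0])                      # "roi" is chosen for this toy frame
```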
Deriving Word Order in Code-Switching: Feature Inheritance and Light Verbs
ERIC Educational Resources Information Center
Shim, Ji Young
2013-01-01
This dissertation investigates code-switching (CS), the concurrent use of more than one language in conversation, commonly observed in bilingual speech. Assuming that code-switching is subject to universal principles, just like monolingual grammar, the dissertation provides a principled account of code-switching, with particular emphasis on OV~VO…
Influence of Composition and Process Selection on Densification of Silicon Nitride.
1982-05-01
9 Accession For NTIS -GRA&I DTIC TAB F] Unannounced 0 Justificatio, Distribution/ Availability Codes Avail and/or Dist 1 Spec ial NI ...concerned with microstructural development and its influence on resultant properties of Si3 N4. Since the early observation that high alpha phase starting...pressed Si3N4 . Knoch and Gazza (2) subsequently investigated the influence of Si3 N4 starting powders with different alpha/beta phase content on the
Coupled-channel analyses on 16O + 147,148,150,152,154Sm heavy-ion fusion reactions
NASA Astrophysics Data System (ADS)
Erol, Burcu; Yılmaz, Ahmet Hakan
2018-02-01
Heavy-ion collisions are typically characterized by the presence of many open reaction channels. At energies around the Coulomb barrier, the main processes are elastic scattering, inelastic excitations of low-lying modes, and fusion of the two nuclei. The fusion process is generally described by a one-dimensional barrier penetration model, taking the scattering potential as the sum of the Coulomb and proximity potentials. We have performed coupled-channel (CC) calculations for heavy-ion fusion reactions; the coupled-channel formalism is applied at energies below the barrier. In this work, fusion cross sections have been calculated and analyzed in detail for the five systems 16O + 147,148,150,152,154Sm in the framework of the coupled-channel approach (using the codes CCFUS and CCDEF) and the Wong formula. The calculated results are compared with experimental data, with CC calculations using the code CCFULL, and with cross-section data taken from the NRV database. CCDEF, CCFULL, and the Wong formula describe the fusion reactions of heavy ions very well when the scattering potential is taken as a Woods-Saxon volume potential with Akyüz-Winther parameters. It was observed that the AW potential parameters are able to reproduce the experimentally observed fusion cross sections reasonably well for these systems. There is good agreement between the calculated results and the experimental and NRV [8] results.
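The Wong formula used here has a closed form; the sketch below evaluates it with illustrative barrier parameters (the barrier height, radius, and curvature are placeholders, not the fitted values for the 16O + Sm systems).

```python
import numpy as np

def wong_cross_section(E, Vb, Rb, hbar_omega):
    """Wong (1973): sigma(E) = (hbar*omega * Rb^2 / (2E)) * ln(1 + exp(2*pi*(E - Vb)/(hbar*omega))),
    with E, Vb, hbar_omega in MeV and Rb in fm; the result is returned in millibarn."""
    sigma_fm2 = (hbar_omega * Rb**2) / (2.0 * E) * np.log1p(np.exp(2.0 * np.pi * (E - Vb) / hbar_omega))
    return 10.0 * sigma_fm2          # 1 fm^2 = 10 mb

energies = np.array([55.0, 60.0, 65.0, 70.0])    # MeV, around the Coulomb barrier
print(wong_cross_section(energies, Vb=61.0, Rb=10.8, hbar_omega=4.3))
```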
Exclusively visual analysis of classroom group interactions
NASA Astrophysics Data System (ADS)
Tucker, Laura; Scherr, Rachel E.; Zickler, Todd; Mazur, Eric
2016-12-01
Large-scale audiovisual data that measure group learning are time consuming to collect and analyze. As an initial step towards scaling qualitative classroom observation, we qualitatively coded classroom video using an established coding scheme with and without its audio cues. We find that interrater reliability is as high when using visual data only—without audio—as when using both visual and audio data to code. Also, interrater reliability is high when comparing use of visual and audio data to visual-only data. We see a small bias to code interactions as group discussion when visual and audio data are used compared with video-only data. This work establishes that meaningful educational observation can be made through visual information alone. Further, it suggests that after initial work to create a coding scheme and validate it in each environment, computer-automated visual coding could drastically increase the breadth of qualitative studies and allow for meaningful educational analysis on a far greater scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helfenbein, Kevin G.; Brown, Wesley M.; Boore, Jeffrey L.
We have sequenced the complete mitochondrial DNA (mtDNA) of the articulate brachiopod Terebratalia transversa. The circular genome is 14,291 bp in size, relatively small compared to other published metazoan mtDNAs. The 37 genes commonly found in animal mtDNA are present; the size decrease is due to the truncation of several tRNA, rRNA, and protein genes, to some nucleotide overlaps, and to a paucity of non-coding nucleotides. Although the gene arrangement differs radically from those reported for other metazoans, some gene junctions are shared with two other articulate brachiopods, Laqueus rubellus and Terebratulina retusa. All genes in the T. transversa mtDNA, unlike those in most metazoan mtDNAs reported, are encoded by the same strand. The A+T content (59.1 percent) is low for a metazoan mtDNA, and there is a high propensity for homopolymer runs and a strong base-compositional strand bias. The coding strand is quite G+T-rich, a skew that is shared by the confamilial (laqueid) species L. rubellus, but opposite to that found in T. retusa, a cancellothyridid. These compositional skews are strongly reflected in the codon usage patterns and the amino acid compositions of the mitochondrial proteins, with markedly different usage observed between T. retusa and the two laqueids. This observation, plus the similarity of the laqueid non-coding regions to the reverse complement of the non-coding region of the cancellothyridid, suggest that an inversion that resulted in a reversal in the direction of first-strand replication has occurred in one of the two lineages. In addition to the presence of one non-coding region in T. transversa that is comparable to those in the other brachiopod mtDNAs, there are two others with the potential to form secondary structures; one or both of these may be involved in the process of transcript cleavage.
Developing an ethical code for engineers: the discursive approach.
Lozano, J Félix
2006-04-01
From the Hippocratic Oath on, deontological codes and other professional self-regulation mechanisms have been used to legitimize and identify professional groups. New technological challenges and, above all, changes in the socioeconomic environment require adaptable codes which can respond to new demands. We assume that ethical codes for professionals should not simply focus on regulative functions, but must also consider ideological and educative functions. Any adaptations should take into account both contents (values, norms and recommendations) and the drafting process itself. In this article we propose a process for developing a professional ethical code for an official professional association (Colegio Oficial de Ingenieros Industriales de Valencia (COIIV) starting from the philosophical assumptions of discursive ethics but adapting them to critical hermeneutics. Our proposal is based on the Integrity Approach rather than the Compliance Approach. A process aiming to achieve an effective ethical document that fulfils regulative and ideological functions requires a participative, dialogical and reflexive methodology. This process must respond to moral exigencies and demands for efficiency and professional effectiveness. In addition to the methodological proposal we present our experience of producing an ethical code for the industrial engineers' association in Valencia (Spain) where this methodology was applied, and we evaluate the detected problems and future potential.
19 CFR 142.45 - Use of bar code by entry filer.
Code of Federal Regulations, 2010 CFR
2010-04-01
19 CFR, Customs Duties, Part 142, Entry Process, Line Release, § 142.45 Use of bar code by entry filer: the entry filer, in accordance with instructions from the port director, shall preprint invoices with the C-4 Code in bar code ...
Advancing Underwater Acoustic Communication for Autonomous Distributed Networks via Sparse Channel Sensing, Coding, and ...
2014-09-30
This program advances underwater acoustic communication technologies for autonomous distributed underwater networks through innovative signal processing, coding, and networking. Topics under coding include OFDM-modulated dynamic coded cooperation in underwater acoustic channels; topics under localization, networking, and testbed work include on-demand ...
Code-Phase Clock Bias and Frequency Offset in PPP Clock Solutions.
Defraigne, Pascale; Sleewaegen, Jean-Marie
2016-07-01
Precise point positioning (PPP) is a zero-difference single-station technique that has proved to be very effective for time and frequency transfer, enabling the comparison of atomic clocks with a precision of a hundred picoseconds and a one-day stability below the 1e-15 level. It was, however, noted that for some receivers, a frequency difference is observed between the clock solution based on the code measurements and the clock solution based on the carrier-phase measurements. These observations reveal some inconsistency either between the code and carrier phases measured by the receiver or between the data analysis strategy of codes and carrier phases. One explanation for this discrepancy is the time offset that can exist for some receivers between the code and the carrier-phase latching. This paper explains how a code-phase bias in the receiver hardware can induce a frequency difference between the code and the carrier-phase clock solutions. The impact on PPP is then quantified. Finally, the possibility to determine this code-phase bias in the PPP modeling is investigated, and the first results are shown to be inappropriate due to the high level of code noise.
Spike Code Flow in Cultured Neuronal Networks.
Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime; Kamimura, Takuya; Yagi, Yasushi; Mizuno-Matsumoto, Yuko; Chen, Yen-Wei
2016-01-01
We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted the short codes from spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the code flow of "1101" and "1011," which are typical pseudorandom sequences such as those often encountered in the literature and in our experiments. They seemed to flow from one electrode to the neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the "maximum cross-correlations" among neighboring electrodes to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of the code. Furthermore, if the spike trains were shuffled in interval orders or across electrodes, the correlations became significantly smaller. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as a means of evaluating information flow in the neuronal network.
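The direction-of-flow analysis rests on finding, for each pair of neighboring electrodes, the lag at which their code-occurrence series are maximally correlated. A minimal sketch of that idea, assuming binned binary occurrence series; the function names and the toy data are illustrative, not the authors' analysis pipeline.

```python
import numpy as np

def max_cross_correlation(x, y, max_lag):
    """Normalized cross-correlation between two code-occurrence time series,
    evaluated over a range of lags; returns the peak value and its lag."""
    x = (x - x.mean()) / (x.std() + 1e-12)
    y = (y - y.mean()) / (y.std() + 1e-12)
    best_val, best_lag = -np.inf, 0
    n = len(x)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:n - lag]
        else:
            a, b = x[:n + lag], y[-lag:]
        val = np.mean(a * b)
        if val > best_val:
            best_val, best_lag = val, lag
    return best_val, best_lag

# Illustrative data: electrode B sees the same code bursts about 5 bins away
# from electrode A, consistent with a code flowing between the two electrodes.
rng = np.random.default_rng(0)
occurrences_a = (rng.random(1000) < 0.05).astype(float)
occurrences_b = np.roll(occurrences_a, 5) + 0.2 * rng.random(1000)
peak, lag = max_cross_correlation(occurrences_a, occurrences_b, max_lag=20)
print(f"peak normalized correlation {peak:.2f} at lag {lag} bins")
```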
Combination of GPS and GLONASS IN PPP algorithms and its effect on site coordinates determination
NASA Astrophysics Data System (ADS)
Hefty, J.; Gerhatova, L.; Burgan, J.
2011-10-01
The Precise Point Positioning (PPP) approach, using un-differenced code and phase GPS observations, precise orbits, and satellite clocks, is an important alternative to analyses based on double differences. We examine the extension of the PPP method by introducing the GLONASS satellites into the processing algorithms. The procedures are demonstrated on the software package ABSOLUTE developed at the Slovak University of Technology. Partial results, such as ambiguities and receiver clocks obtained from separate solutions of the two GNSS constellations, are mutually compared. Finally, the coordinate time series from the combination of GPS and GLONASS observations are compared with GPS-only solutions.
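Structurally, the main change when GLONASS observations are added to a GPS-only solution is one extra receiver parameter, an inter-system bias, alongside the common receiver clock. A minimal single-epoch, code-only sketch of that parameterization is given below; it is illustrative only (it is not the ABSOLUTE package) and omits atmospheric delays, observation weighting, and ambiguities.

```python
import numpy as np

def combined_code_step(sat_pos, prange, is_glonass, x0):
    """One Gauss-Newton step of a combined GPS+GLONASS code-only solution.
    Unknowns: receiver position (3), a common receiver clock (metres), and a
    GPS-GLONASS inter-system bias absorbing differing receiver hardware delays.
    Requires at least five satellites; tropo/iono and weighting are omitted."""
    rx = np.asarray(x0, dtype=float)
    is_glonass = np.asarray(is_glonass, dtype=float)
    rho0 = np.linalg.norm(sat_pos - rx, axis=1)          # approximate ranges
    los = (rx - sat_pos) / rho0[:, None]                 # range partials w.r.t. position
    A = np.hstack([los,
                   np.ones((len(prange), 1)),            # common receiver clock column
                   is_glonass[:, None]])                 # extra column for GLONASS rows only
    dl = np.asarray(prange) - rho0                       # observed minus computed
    dx, *_ = np.linalg.lstsq(A, dl, rcond=None)
    return rx + dx[:3], dx[3], dx[4]                     # position, clock, inter-system bias
```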
Methods for Constructing and Assessing Propensity Scores
Garrido, Melissa M; Kelley, Amy S; Paris, Julia; Roza, Katherine; Meier, Diane E; Morrison, R Sean; Aldridge, Melissa D
2014-01-01
Objectives To model the steps involved in preparing for and carrying out propensity score analyses by providing step-by-step guidance and Stata code applied to an empirical dataset. Study Design Guidance, Stata code, and empirical examples are given to illustrate (1) the process of choosing variables to include in the propensity score; (2) balance of propensity score across treatment and comparison groups; (3) balance of covariates across treatment and comparison groups within blocks of the propensity score; (4) choice of matching and weighting strategies; (5) balance of covariates after matching or weighting the sample; and (6) interpretation of treatment effect estimates. Empirical Application We use data from the Palliative Care for Cancer Patients (PC4C) study, a multisite observational study of the effect of inpatient palliative care on patient health outcomes and health services use, to illustrate the development and use of a propensity score. Conclusions Propensity scores are one useful tool for accounting for observed differences between treated and comparison groups. Careful testing of propensity scores is required before using them to estimate treatment effects. PMID:24779867
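As a rough Python analogue of the Stata workflow described above (particularly steps 1, 2, and 5), one might estimate the score with logistic regression, form inverse-probability-of-treatment weights, and check covariate balance as sketched below. The function names, the use of scikit-learn, and the common 0.1 balance rule of thumb are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_weights(X, treated):
    """Estimate propensity scores with logistic regression and return
    inverse-probability-of-treatment weights (IPTW)."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    w = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))
    return ps, w

def standardized_difference(x, treated, w=None):
    """Weighted standardized mean difference of one covariate; values below
    roughly 0.1 are commonly taken to indicate adequate balance."""
    w = np.ones_like(x, dtype=float) if w is None else w
    m1 = np.average(x[treated == 1], weights=w[treated == 1])
    m0 = np.average(x[treated == 0], weights=w[treated == 0])
    s = np.sqrt(0.5 * (x[treated == 1].var() + x[treated == 0].var()))
    return (m1 - m0) / s
```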
[Long non-coding RNAs in the pathophysiology of atherosclerosis].
Novak, Jan; Vašků, Julie Bienertová; Souček, Miroslav
2018-01-01
The human genome contains about 22 000 protein-coding genes that are transcribed into an even larger number of messenger RNAs (mRNA). Interestingly, the results of the ENCODE project from 2012 show that, although up to 90 % of our genome is actively transcribed, protein-coding mRNAs make up only 2-3 % of the total amount of transcribed RNA. The rest of the RNA transcripts are not translated into proteins, which is why they are referred to as "non-coding RNAs". Earlier, non-coding RNA was considered "the dark matter of the genome", or "junk" whose genes have accumulated in our DNA during the course of evolution. Today we know that non-coding RNAs fulfil a variety of regulatory functions in our body - they intervene in epigenetic processes from chromatin remodelling to histone methylation, in the transcription process itself, and in post-transcriptional processes. Long non-coding RNAs (lncRNA) are the class of non-coding RNAs longer than 200 nucleotides (non-coding RNAs shorter than 200 nucleotides are called small non-coding RNAs). lncRNAs represent a widely varied and large group of molecules with diverse regulatory functions. They can be identified in virtually all cell types and tissues, and even in the extracellular space, including blood and specifically plasma. Their levels change during the course of organogenesis, they are specific to different tissues, and their changes also accompany the development of different illnesses, including atherosclerosis. This review article aims to introduce lncRNAs in general and then focuses on some of their specific representatives in relation to the process of atherosclerosis (i.e., we describe lncRNA involvement in the biology of endothelial cells, vascular smooth muscle cells, and immune cells), and we further describe the possible clinical potential of lncRNAs, whether in the diagnostics or therapy of atherosclerosis and its clinical manifestations. Key words: atherosclerosis - lincRNA - lncRNA - MALAT - MIAT.
Valenzuela-Miranda, Diego; Gallardo-Escárate, Cristian
2016-12-01
Despite the high prevalence and impact on Chilean salmon aquaculture of the intracellular bacterium Piscirickettsia salmonis, the molecular underpinnings of host-pathogen interactions remain unclear. Herein, the interplay of coding and non-coding transcripts has been proposed as a key mechanism involved in the immune response. Therefore, the aim of this study was to show how coding and non-coding transcripts are modulated during the infection process of Atlantic salmon with P. salmonis. For this, RNA-seq was conducted in brain, spleen, and head kidney samples, revealing different transcriptional profiles according to bacterial load. Additionally, while most of the regulated genes were annotated to diverse biological processes during infection, a common response associated with clathrin-mediated endocytosis and iron homeostasis was present in all tissues. Interestingly, while endocytosis-promoting factors and clathrin induction were upregulated, endocytic receptors were mainly downregulated. Furthermore, the regulation of genes related to iron homeostasis suggested an intracellular accumulation of iron, a process in which heme biosynthesis/degradation pathways might play an important role. Regarding the non-coding response, 918 putative long non-coding RNAs were identified, of which 425 were newly characterized for S. salar. Finally, co-localization and co-expression analyses revealed a strong correlation between the modulation of long non-coding RNAs and genes associated with endocytosis and iron homeostasis. These results represent the first comprehensive study of putative interplaying mechanisms of coding and non-coding RNAs during bacterial infection in salmonids. Copyright © 2016 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Lee, Seong-Soo
1982-01-01
Tenth-grade students (n=144) received training on one of three processing methods: coding-mapping (simultaneous), coding only, or decision tree (sequential). The induced simultaneous processing strategy worked optimally under rule learning, while the sequential strategy was difficult to induce and/or not optimal for rule-learning operations.…
Dual Coding, Reasoning and Fallacies.
ERIC Educational Resources Information Center
Hample, Dale
1982-01-01
Develops the theory that a fallacy is not a comparison of a rhetorical text to a set of definitions but a comparison of one person's cognition with another's. Reviews Paivio's dual coding theory, relates nonverbal coding to reasoning processes, and generates a limited fallacy theory based on dual coding theory. (PD)
Rapid 3D bioprinting from medical images: an application to bone scaffolding
NASA Astrophysics Data System (ADS)
Lee, Daniel Z.; Peng, Matthew W.; Shinde, Rohit; Khalid, Arbab; Hong, Abigail; Pennacchi, Sara; Dawit, Abel; Sipzner, Daniel; Udupa, Jayaram K.; Rajapakse, Chamith S.
2018-03-01
Bioprinting of tissue has applications throughout medicine. Recent advances in medical imaging allow the generation of 3-dimensional models that can then be 3D printed. However, the conventional method of converting medical images to 3D-printable G-Code instructions has several limitations, namely significant processing time for large, high-resolution images, and the loss of microstructural surface information from surface resolution and subsequent reslicing. We have overcome these issues by creating a JAVA program that skips the intermediate triangularization and reslicing steps and directly converts binary DICOM images into G-Code. In this study, we tested the two methods of G-Code generation on the application of synthetic bone graft scaffold generation. We imaged human cadaveric proximal femurs at an isotropic resolution of 0.03 mm using a high-resolution peripheral quantitative computed tomography (HR-pQCT) scanner. These images, in the Digital Imaging and Communications in Medicine (DICOM) format, were then processed through two methods. In each method, slices and regions of print were selected, filtered to generate a smoothed image, and thresholded. In the conventional method, these processed images are converted to the STereoLithography (STL) format and then resliced to generate G-Code. In the new, direct method, these processed images are run through our JAVA program and directly converted to G-Code. File size, processing time, and print time were measured for each. We found that this new method produced a significant reduction in G-Code file size as well as processing time (92.23% reduction). This allows for more rapid 3D printing from medical images.
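The direct route can be pictured as: read a slice, smooth it, threshold it, and write motion commands straight from the binary mask. A simplified Python sketch of that idea follows; the study's implementation is a JAVA program, and the use of pydicom/SciPy, the threshold, pixel size, feed rate, and extrusion value here are all assumptions.

```python
import numpy as np
import pydicom
from scipy.ndimage import gaussian_filter

def slice_to_gcode(dicom_path, threshold, pixel_mm=0.03, z_mm=0.0, feed=600):
    """Convert one DICOM slice directly to G-Code: smooth, threshold, and emit
    one extrusion move per contiguous run of 'print' pixels in each row."""
    img = pydicom.dcmread(dicom_path).pixel_array.astype(float)
    img = gaussian_filter(img, sigma=1.0)           # smoothing step
    mask = img > threshold                          # binarize into print / no-print
    lines = [f"G1 Z{z_mm:.3f} F{feed}"]
    for r, row in enumerate(mask):
        cols = np.flatnonzero(row)
        if cols.size == 0:
            continue
        # split the row into contiguous runs and write each run as one move
        for run in np.split(cols, np.where(np.diff(cols) > 1)[0] + 1):
            x0, x1 = run[0] * pixel_mm, run[-1] * pixel_mm
            y = r * pixel_mm
            lines.append(f"G0 X{x0:.3f} Y{y:.3f}")
            lines.append(f"G1 X{x1:.3f} Y{y:.3f} E1")
    return "\n".join(lines)
```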
Experimental benchmark of the NINJA code for application to the Linac4 H- ion source plasma
NASA Astrophysics Data System (ADS)
Briefi, S.; Mattei, S.; Rauner, D.; Lettry, J.; Tran, M. Q.; Fantz, U.
2017-10-01
For a dedicated performance optimization of negative hydrogen ion sources applied at particle accelerators, a detailed assessment of the plasma processes is required. Due to the compact design of these sources, diagnostic access is typically limited to optical emission spectroscopy yielding only line-of-sight integrated results. In order to allow for a spatially resolved investigation, the electromagnetic particle-in-cell Monte Carlo collision code NINJA has been developed for the Linac4 ion source at CERN. This code considers the RF field generated by the ICP coil as well as the external static magnetic fields and calculates self-consistently the resulting discharge properties. NINJA is benchmarked at the diagnostically well accessible lab experiment CHARLIE (Concept studies for Helicon Assisted RF Low pressure Ion sourcEs) at varying RF power and gas pressure. A good general agreement is observed between experiment and simulation although the simulated electron density trends for varying pressure and power as well as the absolute electron temperature values deviate slightly from the measured ones. This can be explained by the assumption of strong inductive coupling in NINJA, whereas the CHARLIE discharges show the characteristics of loosely coupled plasmas. For the Linac4 plasma, this assumption is valid. Accordingly, both the absolute values of the accessible plasma parameters and their trends for varying RF power agree well in measurement and simulation. At varying RF power, the H- current extracted from the Linac4 source peaks at 40 kW. For volume operation, this is perfectly reflected by assessing the processes in front of the extraction aperture based on the simulation results where the highest H- density is obtained for the same power level. In surface operation, the production of negative hydrogen ions at the converter surface can only be considered by specialized beam formation codes, which require plasma parameters as input. It has been demonstrated that this input can be provided reliably by the NINJA code.
Radiation Modeling for the Reentry of the Hayabusa Sample Return Capsule
NASA Technical Reports Server (NTRS)
Winter, Michael W.; McDaniel, Ryan D.; Chen, Yih-Kang; Liu, Yen; Saunders, David; Jenniskens, Petrus
2011-01-01
Predicted shock-layer emission signatures of the Japanese Hayabusa capsule during its reentry are presented for comparison with flight measurements made during an airborne observation mission using NASA's DC-8 Airborne Laboratory. For each altitude, lines of sight were extracted from flow field solutions computed using an in-house high-fidelity CFD code, DPLR, at 11 points along the flight trajectory of the capsule. These lines of sight were used as inputs for the line-by-line radiation code NEQAIR, and emission spectra of the air plasma were computed in the wavelength range from 300 nm to 1600 nm, a range which covers all of the different experiments onboard the DC-8. In addition, the computed flow field solutions were post-processed with the material thermal response code FIAT, and the resulting surface temperatures of the heat shield were used to generate thermal emission spectra based on Planck radiation. Both spectra were summed and integrated over the flow field. The resulting emission at each trajectory point was propagated to the DC-8 position and transformed into incident irradiance. Comparisons with experimental data are shown.
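The heat-shield contribution described above is essentially grey-body emission evaluated over the instrument band. A small sketch of that step is shown below, assuming an illustrative surface temperature and emissivity rather than values from the FIAT solution.

```python
import numpy as np

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temperature_k):
    """Planck spectral radiance B_lambda(T) in W m^-2 sr^-1 m^-1."""
    x = H * C / (wavelength_m * KB * temperature_k)
    return 2.0 * H * C**2 / wavelength_m**5 / np.expm1(x)

# Grey-body emission of an assumed 3000 K heat-shield surface over 300-1600 nm
wl = np.linspace(300e-9, 1600e-9, 500)
emissivity = 0.85                        # assumed surface emissivity (placeholder)
spectrum = emissivity * planck_radiance(wl, 3000.0)
band_radiance = np.trapz(spectrum, wl)   # integrated radiance over the band, W m^-2 sr^-1
```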
The Multitheoretical List of Therapeutic Interventions - 30 items (MULTI-30).
Solomonov, Nili; McCarthy, Kevin S; Gorman, Bernard S; Barber, Jacques P
2018-01-16
To develop a brief version of the Multitheoretical List of Therapeutic Interventions (MULTI-60) in order to decrease completion time burden by approximately half, while maintaining content coverage. Study 1 aimed to select 30 items. Study 2 aimed to examine the reliability and internal consistency of the MULTI-30. Study 3 aimed to validate the MULTI-30 and ensure content coverage. In Study 1, the sample included 186 therapist and 255 patient MULTI ratings, and 164 ratings of sessions coded by trained observers. Internal consistency (Cronbach's alpha and McDonald's omega) was calculated and confirmatory factor analysis was conducted. Psychotherapy experts rated content relevance. Study 2 included a sample of 644 patient and 522 therapist ratings, and 793 codings of psychotherapy sessions. In Study 3, the sample included 33 codings of sessions. A series of regression analyses was conducted to examine replication of previously published findings using the MULTI-30. The MULTI-30 was found valid, reliable, and internally consistent across the 2564 ratings examined in the three studies presented. The MULTI-30 is a brief and reliable process measure. Future studies are required for further validation.
2010-01-01
Background Adenosine to inosine (A-to-I) RNA-editing is an essential post-transcriptional mechanism that occurs in numerous sites in the human transcriptome, mainly within Alu repeats. It has been shown to have consistent levels of editing across individuals in a few targets in the human brain and altered in several human pathologies. However, the variability across human individuals of editing levels in other tissues has not been studied so far. Results Here, we analyzed 32 skin samples, looking at A-to-I editing levels in three genes within coding sequences and in the Alu repeats of six different genes. We observed highly consistent editing levels across different individuals as well as across tissues, not only in coding targets but, surprisingly, also in the non-evolutionarily conserved Alu repeats. Conclusions Our findings suggest that A-to-I RNA-editing of Alu elements is a tightly regulated process and, as such, might have been recruited in the course of primate evolution for post-transcriptional regulatory mechanisms. PMID:21029430
Naville, M; Warren, I A; Haftek-Terreau, Z; Chalopin, D; Brunet, F; Levin, P; Galiana, D; Volff, J-N
2016-04-01
Viruses and transposable elements, once considered as purely junk and selfish sequences, have repeatedly been used as a source of novel protein-coding genes during the evolution of most eukaryotic lineages, a phenomenon called 'molecular domestication'. This is exemplified perfectly in mammals and other vertebrates, where many genes derived from long terminal repeat (LTR) retroelements (retroviruses and LTR retrotransposons) have been identified through comparative genomics and functional analyses. In particular, genes derived from gag structural protein and envelope (env) genes, as well as from the integrase-coding and protease-coding sequences, have been identified in humans and other vertebrates. Retroelement-derived genes are involved in many important biological processes including placenta formation, cognitive functions in the brain and immunity against retroelements, as well as in cell proliferation, apoptosis and cancer. These observations support an important role of retroelement-derived genes in the evolution and diversification of the vertebrate lineage. Copyright © 2016 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
Jacques-Tiura, Angela J; Carcone, April Idalski; Naar, Sylvie; Brogan Hartlieb, Kathryn; Albrecht, Terrance L; Barton, Ellen
2017-03-01
We sought to examine communication between counselors and caregivers of adolescents with obesity to determine what types of counselor behaviors increased caregivers' motivational statements regarding supporting their child's weight loss. We coded 20-min Motivational Interviewing sessions with 37 caregivers of African American 12-16-year-olds using the Minority Youth Sequential Coding for Observing Process Exchanges. We used sequential analysis to determine which counselor communication codes predicted caregiver motivational statements. Counselors' questions to elicit motivational statements and emphasis on autonomy increased the likelihood of both caregiver change talk and commitment language statements. Counselors' reflections of change talk predicted further change talk, and reflections of commitment language predicted more commitment language. When working to increase motivation among caregivers of adolescents with overweight or obesity, providers should strive to reflect motivational statements, ask questions to elicit motivational statements, and emphasize caregivers' autonomy. © The Author 2016. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.
2011-01-01
Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185
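For orientation, the core operation being parallelized is a neighbor-weighted summation such as the SPH density estimate. A brute-force CPU sketch of that summation with the standard cubic spline kernel is shown below; the cited code's contribution is precisely replacing such O(N^2) loops with CUDA kernels and neighbor lists, which this sketch does not attempt.

```python
import numpy as np

def cubic_spline_w(q, h):
    """Standard cubic spline SPH kernel in 3-D (compact support radius 2h)."""
    sigma = 1.0 / (np.pi * h**3)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(pos, mass, h):
    """Brute-force SPH density summation rho_i = sum_j m_j W(|r_i - r_j|, h).
    A production code (CPU or GPU) replaces this O(N^2) loop with neighbour lists."""
    n = len(pos)
    rho = np.zeros(n)
    for i in range(n):
        r = np.linalg.norm(pos - pos[i], axis=1)
        rho[i] = np.sum(mass * cubic_spline_w(r / h, h))
    return rho
```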
Colour Coding of Maps for Colour Deficient Observers.
Røise, Anne Kari; Kvitle, Anne Kristin; Green, Phil
2016-01-01
We evaluate the colour coding of a web map traffic information service based on profiles simulating colour vision deficiencies. Based on these simulations and principles for universal design, we propose adjustments of the existing colours creating more readable maps for the colour vision deficient observers.
NASA Astrophysics Data System (ADS)
Horký, Miroslav; Omura, Yoshiharu; Santolík, Ondřej
2018-04-01
This paper presents the wave mode conversion between electrostatic and electromagnetic waves on the plasma density gradient. We use 2-D electromagnetic code KEMPO2 implemented with the generation of density gradient to simulate such a conversion process. In the dense region, we use ring beam instability to generate electron Bernstein waves and we study the temporal evolution of wave spectra, velocity distributions, Poynting flux, and electric and magnetic energies to observe the wave mode conversion. Such a conversion process can be a source of electromagnetic emissions which are routinely measured by spacecraft on the plasmapause density gradient.
Hyper-Spectral Synthesis of Active OB Stars Using GLaDoS
NASA Astrophysics Data System (ADS)
Hill, N. R.; Townsend, R. H. D.
2016-11-01
In recent years there has been considerable interest in using graphics processing units (GPUs) to perform scientific computations that have traditionally been handled by central processing units (CPUs). However, there is one area where the scientific potential of GPUs has been overlooked - computer graphics, the task they were originally designed for. Here we introduce GLaDoS, a hyper-spectral code which leverages the graphics capabilities of GPUs to synthesize spatially and spectrally resolved images of complex stellar systems. We demonstrate how GLaDoS can be applied to calculate observables for various classes of stars including systems with inhomogeneous surface temperatures and contact binaries.
Nimbus/TOMS Science Data Operations Support
NASA Technical Reports Server (NTRS)
Childs, Jeff
1998-01-01
1. Participate in and provide analysis of laboratory and in-flight calibration of UV sensors used for space observations of backscattered UV radiation. 2. Provide support to the TOMS Science Operations Center, including generating instrument command lists and analysis of TOMS health and safety data. 3. Develop and maintain software and algorithms designed to capture and process raw spacecraft and instrument data, convert the instrument output into measured radiance and irradiances, and produce scientifically valid products. 4. Process the TOMS data into Level 1, Level 2, and Level 3 data products. 5. Provide analysis of the science data products in support of NASA GSFC Code 916's research.
Nimbus/TOMS Science Data Operations Support
NASA Technical Reports Server (NTRS)
1998-01-01
Projected goals include the following: (1) Participate in and provide analysis of laboratory and in-flight calibration of LTV sensors used for space observations of backscattered LTV radiation; (2) Provide support to the TOMS Science Operations Center, including generating instrument command lists and analysis of TOMS health and safety data; (3) Develop and maintain software and algorithms designed to capture and process raw spacecraft and instrument data, convert the instrument output into measured radiance and irradiances, and produce scientifically valid products; (4) Process the TOMS data into Level 1, Level 2, and Level 3 data products; (5) Provide analysis of the science data products in support of NASA GSFC Code 916's research.
NASA Astrophysics Data System (ADS)
Zhang, Baocheng; Teunissen, Peter J. G.; Yuan, Yunbin; Zhang, Xiao; Li, Min
2018-03-01
Sensing the ionosphere with the global positioning system involves two sequential tasks, namely the ionospheric observable retrieval and the ionospheric parameter estimation. A prominent source of error has long been identified as short-term variability in receiver differential code bias (rDCB). We modify the carrier-to-code leveling (CCL), a method commonly used to accomplish the first task, through assuming rDCB to be unlinked in time. Aside from the ionospheric observables, which are affected by, among others, the rDCB at one reference epoch, the Modified CCL (MCCL) can also provide the rDCB offsets with respect to the reference epoch as by-products. Two consequences arise. First, MCCL is capable of excluding the effects of time-varying rDCB from the ionospheric observables, which, in turn, improves the quality of ionospheric parameters of interest. Second, MCCL has significant potential as a means to detect between-epoch fluctuations experienced by rDCB of a single receiver.
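A compact way to see what the leveling step does, and what the modified variant additionally exposes, is the following sketch for a single continuous arc. Sign and scaling conventions, satellite biases, and cycle-slip handling are omitted; this is an illustration of the idea, not the authors' MCCL estimator.

```python
import numpy as np

def carrier_to_code_leveling(p_gf, l_gf):
    """Classic carrier-to-code leveling (CCL) for one continuous arc:
    shift the precise but ambiguous geometry-free phase so that its arc mean
    matches the noisy but unambiguous geometry-free code."""
    offset = np.nanmean(p_gf - l_gf)        # constant per arc: ambiguity plus biases
    return l_gf + offset                    # leveled ionospheric observable

def epochwise_offsets(p_gf, l_gf):
    """Epoch-wise code-minus-phase differences relative to their arc mean; in the
    spirit of the modified CCL, their variation within an arc reflects (among
    other effects) short-term changes in the receiver differential code bias."""
    d = p_gf - l_gf
    return d - np.nanmean(d)
```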
Self-Regulation in Broadcasting Revisited.
ERIC Educational Resources Information Center
Linton, Bruce A.
1987-01-01
Discusses the self-regulatory processes of the broadcast industry as related to advertising and programing standards after the elimination of the National Association of Broadcasters (NAB) "Code." Asserts that, even though the code is gone, the process of self-regulation continues. (MM)
Precise Target Geolocation and Tracking Based on UAV Video Imagery
NASA Astrophysics Data System (ADS)
Hosseinpoor, H. R.; Samadzadegan, F.; Dadrasjavan, F.
2016-06-01
There is an increasingly large number of applications for Unmanned Aerial Vehicles (UAVs), from monitoring and mapping to target geolocation. However, most commercial UAVs are equipped with low-cost navigation sensors, such as a C/A-code GPS receiver and a low-cost IMU on board, allowing a positioning accuracy of 5 to 10 meters. This low accuracy cannot be used in applications that require high-precision data at the cm level. This paper presents a precise process for geolocation of ground targets based on thermal video imagery acquired by a small UAV equipped with RTK GPS. The geolocation data are filtered using an extended Kalman filter, which provides a smoothed estimate of target location and target velocity. The accurate geolocation of targets during image acquisition is conducted via traditional photogrammetric bundle adjustment equations, using accurate exterior orientation parameters obtained from the on-board IMU and RTK GPS sensors, Kalman filtering, and interior orientation parameters of the thermal camera from a pre-flight laboratory calibration process. The results of this study, compared with ordinary code-based GPS, indicate that RTK observations with the proposed method improve target geolocation accuracy by more than a factor of 10.
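The filtering stage can be illustrated with a constant-velocity Kalman filter that smooths the frame-by-frame geolocation fixes into position and velocity estimates. The paper uses an extended Kalman filter with its own models; the state layout and noise levels below are placeholders.

```python
import numpy as np

def kalman_cv_step(x, P, z, dt, q=1.0, r=4.0):
    """One predict/update cycle of a constant-velocity Kalman filter for a
    ground target; state x = [east, north, v_east, v_north], measurement z is
    the geolocated position from one video frame (illustrative noise levels)."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    Q = q * np.eye(4)
    R = r * np.eye(2)
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the geolocated target position
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P
```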
NASA Astrophysics Data System (ADS)
Schartmann, M.; Meisenheimer, K.; Klahr, H.; Camenzind, M.; Wolf, S.; Henning, Th.
Recently, the MID-infrared Interferometric instrument (MIDI) at the VLTI has shown that the dust tori in the two nearby Seyfert galaxies NGC 1068 and the Circinus galaxy are geometrically thick and can be well described by a thin, warm central disk, surrounded by a colder and fluffy torus component. By carrying out hydrodynamical simulations with the help of the TRAMP code (Klahr et al. 1999), we follow the evolution of a young nuclear star cluster in terms of discrete mass loss and energy injection from stellar processes. This naturally leads to a filamentary large-scale torus component, where cold gas is able to flow radially inwards. The filaments open out into a dense and very turbulent disk structure. In a post-processing step, we calculate observable quantities such as spectral energy distributions or images with the help of the 3D radiative transfer code MC3D (Wolf 2003). Good agreement is found in comparisons with data due to the existence of almost dust-free lines of sight through the large-scale component and the large column densities caused by the dense disk.
Overview of the Aeroelastic Prediction Workshop
NASA Technical Reports Server (NTRS)
Heeg, Jennifer; Chwalowski, Pawel; Florance, Jennifer P.; Wieseman, Carol D.; Schuster, David M.; Perry, Raleigh B.
2013-01-01
The Aeroelastic Prediction Workshop brought together an international community of computational fluid dynamicists as a step in defining the state of the art in computational aeroelasticity. This workshop's technical focus was prediction of unsteady pressure distributions resulting from forced motion, benchmarking the results first using unforced system data. The most challenging aspects of the physics were identified as capturing oscillatory shock behavior, dynamic shock-induced separated flow, and tunnel wall boundary layer influences. The majority of the participants used unsteady Reynolds-averaged Navier-Stokes codes. These codes were exercised at transonic Mach numbers for three configurations and comparisons were made with existing experimental data. Substantial variations were observed among the computational solutions as well as differences relative to the experimental data. Contributing issues to these differences include wall effects and wall modeling, non-standardized convergence criteria, inclusion of static aeroelastic deflection, methodology for oscillatory solutions, and post-processing methods. Contributing issues pertaining principally to the experimental data sets include the position of the model relative to the tunnel wall, splitter plate size, wind tunnel expansion slot configuration, spacing and location of pressure instrumentation, and data processing methods.
Ohneck, Emily J.; Arivett, Brock A.; Fiester, Steven E.; Wood, Cecily R.; Metz, Maeva L.; Simeone, Gabriella M.
2018-01-01
The capacity of Acinetobacter baumannii to persist and cause infections depends on its interaction with abiotic and biotic surfaces, including those found on medical devices and host mucosal surfaces. However, the extracellular stimuli affecting these interactions are poorly understood. Based on our previous observations, we hypothesized that mucin, a glycoprotein secreted by lung epithelial cells, particularly during respiratory infections, significantly alters A. baumannii’s physiology and its interaction with the surrounding environment. Biofilm, virulence and growth assays showed that mucin enhances the interaction of A. baumannii ATCC 19606T with abiotic and biotic surfaces and its cytolytic activity against epithelial cells while serving as a nutrient source. The global effect of mucin on the physiology and virulence of this pathogen is supported by RNA-Seq data showing that its presence in a low nutrient medium results in the differential transcription of 427 predicted protein-coding genes. The reduced expression of ion acquisition genes and the increased transcription of genes coding for energy production together with the detection of mucin degradation indicate that this host glycoprotein is a nutrient source. The increased expression of genes coding for adherence and biofilm biogenesis on abiotic and biotic surfaces, the degradation of phenylacetic acid and the production of an active type VI secretion system further supports the role mucin plays in virulence. Taken together, our observations indicate that A. baumannii recognizes mucin as an environmental signal, which triggers a response cascade that allows this pathogen to acquire critical nutrients and promotes host-pathogen interactions that play a role in the pathogenesis of bacterial infections. PMID:29309434
The Evolution of Bony Vertebrate Enhancers at Odds with Their Coding Sequence Landscape.
Yousaf, Aisha; Sohail Raza, Muhammad; Ali Abbasi, Amir
2015-08-06
Enhancers lie at the heart of transcriptional and developmental gene regulation. Therefore, changes in enhancer sequences usually disrupt the target gene expression and result in disease phenotypes. Despite the well-established role of enhancers in development and disease, evolutionary sequence studies are lacking. The current study attempts to unravel the puzzle of bony vertebrates' conserved noncoding elements (CNE) enhancer evolution. Bayesian phylogenetics of enhancer sequences spotlights promising interordinal relationships among placental mammals, proposing a closer relationship between humans and laurasiatherians while placing rodents at the basal position. Clock-based estimates of enhancer evolution provided a dynamic picture of interspecific rate changes across the bony vertebrate lineage. Moreover, coelacanth in the study augmented our appreciation of the vertebrate cis-regulatory evolution during water-land transition. Intriguingly, we observed a pronounced upsurge in enhancer evolution in land-dwelling vertebrates. These novel findings triggered us to further investigate the evolutionary trend of coding as well as CNE nonenhancer repertoires, to highlight the relative evolutionary dynamics of diverse genomic landscapes. Surprisingly, the evolutionary rates of enhancer sequences were clearly at odds with those of the coding and the CNE nonenhancer sequences during vertebrate adaptation to land, with land vertebrates exhibiting significantly reduced rates of coding sequence evolution in comparison to their fast evolving regulatory landscape. The observed variation in tetrapod cis-regulatory elements caused the fine-tuning of associated gene regulatory networks. Therefore, the increased evolutionary rate of tetrapods' enhancer sequences might be responsible for the variation in developmental regulatory circuits during the process of vertebrate adaptation to land. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Surface hardening using cw CO2 laser: laser heat treatment, modelation, and experimental work
NASA Astrophysics Data System (ADS)
Muniz, German; Alum, Jorge
1996-02-01
In the present work we give the results of applying laser surface hardening techniques, using a cw carbon dioxide laser as the energy source, to 65G steel. The laser heat treatment results are presented theoretically and experimentally. Continuous-wave carbon dioxide lasers of 0.6, 0.3, and 0.4 kW were used. A physical model for the description of the thermophysical laser-metal interaction process is given, and a numerical algorithm is used to solve this problem by means of the LHT code. The results are compared with the corresponding experimental ones, and very good agreement is observed. The LHT code is able to predict transformation hardening by laser heating. These results will be complemented by others concerning laser alloying and cladding, to be presented in a second paper.
Modelling of 13CH4 injection and local carbon deposition at the outer divertor of ASDEX Upgrade
NASA Astrophysics Data System (ADS)
Aho-Mantila, L.; Airila, M. I.; Wischmeier, M.; Krieger, K.; Pugno, R.; Coster, D. P.; Chankin, A. V.; Neu, R.; Rohde, V.
2009-12-01
Numerical modelling of 13CH4 injection into the outer divertor plasma of the full tungsten, vertical target of ASDEX Upgrade is presented. The SOLPS5.0 code package is used to calculate a realistic scrape-off layer plasma background corresponding to L-mode discharges in the attached divertor plasma regime. The ERO code is then used for detailed modelling of the hydrocarbon break-up, re-deposition and re-erosion processes. The deposition patterns observed at two different poloidal locations are shown to strongly reflect the cross-field gradients in divertor plasma density and temperature, as well as the local plasma collisionality. Experimental results with forward and reversed BT, accompanied by numerical modelling, also point towards a significant poloidal hydrocarbon E×B drift in the divertor region.
Atomic Processes in X-ray Photoionized Gas
NASA Technical Reports Server (NTRS)
Kallman, Timothy
2005-01-01
It has long been known that photoionization and photoabsorption play a dominant role in determining the state of gas in nebulae surrounding hot stars and in active galaxies. Recent observations of X-ray spectra demonstrate that these processes are also dominant in highly ionized gas near compact objects, and also affect the transmission of X-rays from the majority of astronomical sources. This has led to new insights into the understanding of what is going on in these sources. It has also pointed out the need for accurate atomic cross sections for photoionization and absorption, notably for processes involving inner shells. The xstar code can be used for calculating the heating, ionization and reprocessing of X-rays by gas in a range of ionization states and temperatures. It has recently been updated to include an improved treatment of inner shell transitions in iron. I will review the capabilities of xstar, the atomic data, and illustrate some applications to recent X-ray spectral observations.
A three-dimensional code for muon propagation through the rock: MUSIC
NASA Astrophysics Data System (ADS)
Antonioli, P.; Ghetti, C.; Korolkova, E. V.; Kudryavtsev, V. A.; Sartorelli, G.
1997-10-01
We present a new three-dimensional Monte-Carlo code MUSIC (MUon SImulation Code) for muon propagation through the rock. All processes of muon interaction with matter with high energy loss (including the knock-on electron production) are treated as stochastic processes. The angular deviation and lateral displacement of muons due to multiple scattering, as well as bremsstrahlung, pair production and inelastic scattering are taken into account. The code has been applied to obtain the energy distribution and angular and lateral deviations of single muons at different depths underground. The muon multiplicity distributions obtained with MUSIC and CORSIKA (Extensive Air Shower simulation code) are also presented. We discuss the systematic uncertainties of the results due to different muon bremsstrahlung cross-sections.
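To convey the flavor of treating energy losses stochastically, the toy propagator below mixes a quasi-continuous ionization term with occasional large radiative losses. The parameter values are order-of-magnitude placeholders rather than MUSIC's cross sections, and angular scattering and lateral displacement are not modeled.

```python
import numpy as np

def propagate_muon(e_gev, depth_mwe, step=10.0, a=0.22, b=3.4e-4, rng=None):
    """Toy 1-D muon propagation: continuous ionization loss a (GeV per m.w.e.)
    plus a crudely sampled 'catastrophic' radiative loss governed by b
    (per m.w.e.). Returns the surviving energy, or 0 if the muon stops."""
    rng = np.random.default_rng() if rng is None else rng
    x = 0.0
    while x < depth_mwe and e_gev > 0.5:          # 0.5 GeV cutoff is a placeholder
        e_gev -= a * step                          # quasi-continuous losses
        if rng.random() < b * step:                # at most one hard radiative event per step
            e_gev -= e_gev * rng.uniform(0.1, 0.9)
        x += step
    return e_gev if x >= depth_mwe else 0.0
```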
The "periodic table" of the genetic code: A new way to look at the code and the decoding process.
Komar, Anton A
2016-01-01
Henri Grosjean and Eric Westhof recently presented an information-rich, alternative view of the genetic code, which takes into account current knowledge of the decoding process, including the complex nature of interactions between mRNA, tRNA and rRNA that take place during protein synthesis on the ribosome, and it also better reflects the evolution of the code. The new asymmetrical circular genetic code has a number of advantages over the traditional codon table and the previous circular diagrams (with a symmetrical/clockwise arrangement of the U, C, A, G bases). Most importantly, all sequence co-variances can be visualized and explained based on the internal logic of the thermodynamics of codon-anticodon interactions.
Nonlinear Real-Time Optical Signal Processing.
1988-07-01
Principal Investigator: B. K. Jenkins, Signal and Image Processing Institute, University of Southern California, Los Angeles, California. Summary of work performed during the period 1 July 1987 - 30 June 1988.
Power Aware Signal Processing Environment (PASPE) for PAC/C
2003-02-01
For our implementation, the Annapolis FFT core was radix-256, and therefore the smallest PN code length that could be processed was the PN-64. A C-code version of the correlator was compared to the FPGA implementation. The results in Figure 68 show that, for a PN-1024, ...
Auditory-neurophysiological responses to speech during early childhood: Effects of background noise
White-Schwoch, Travis; Davies, Evan C.; Thompson, Elaine C.; Carr, Kali Woodruff; Nicol, Trent; Bradlow, Ann R.; Kraus, Nina
2015-01-01
Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But learning rarely occurs under ideal listening conditions—children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3–5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features—even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response properties in this age group. These normative metrics may be useful clinically to evaluate auditory processing difficulties during early childhood. PMID:26113025
HCPCS Coding: An Integral Part of Your Reimbursement Strategy.
Nusgart, Marcia
2013-12-01
The first step to a successful reimbursement strategy is to ensure that your wound care product has the most appropriate Healthcare Common Procedure Coding System (HCPCS) code (or billing) for your product. The correct HCPCS code plays an essential role in patient access to new and existing technologies. When devising a strategy to obtain a HCPCS code for its product, companies must consider a number of factors as follows: (1) Has the product gone through the Food and Drug Administration (FDA) regulatory process or does it need to do so? Will the FDA code designation impact which HCPCS code will be assigned to your product? (2) In what "site of service" do you intend to market your product? Where will your customers use the product? Which coding system (CPT ® or HCPCS) applies to your product? (3) Does a HCPCS code for a similar product already exist? Does your product fit under the existing HCPCS code? (4) Does your product need a new HCPCS code? What is the linkage, if any, between coding, payment, and coverage for the product? Researchers and companies need to start early and place the same emphasis on a reimbursement strategy as it does on a regulatory strategy. Your reimbursement strategy staff should be involved early in the process, preferably during product research and development and clinical trial discussions.
Signal-processing theory for the TurboRogue receiver
NASA Technical Reports Server (NTRS)
Thomas, J. B.
1995-01-01
Signal-processing theory for the TurboRogue receiver is presented. The signal form is traced from its formation at the GPS satellite, to the receiver antenna, and then through the various stages of the receiver, including extraction of phase and delay. The analysis treats the effects of ionosphere, troposphere, signal quantization, receiver components, and system noise, covering processing in both the 'code mode' when the P code is not encrypted and in the 'P-codeless mode' when the P code is encrypted. As a possible future improvement to the current analog front end, an example of a highly digital front end is analyzed.
NASA Astrophysics Data System (ADS)
Alipchenkov, V. M.; Anfimov, A. M.; Afremov, D. A.; Gorbunov, V. S.; Zeigarnik, Yu. A.; Kudryavtsev, A. V.; Osipov, S. L.; Mosunova, N. A.; Strizhov, V. F.; Usov, E. V.
2016-02-01
The conceptual fundamentals of the development of the new-generation system thermal-hydraulic computational HYDRA-IBRAE/LM code are presented. The code is intended to simulate the thermal-hydraulic processes that take place in the loops and the heat-exchange equipment of liquid-metal cooled fast reactor systems under normal operation and anticipated operational occurrences and during accidents. The paper provides a brief overview of Russian and foreign system thermal-hydraulic codes for modeling liquid-metal coolants and gives grounds for the necessity of developing a new-generation HYDRA-IBRAE/LM code. Considering the specific engineering features of the nuclear power plants (NPPs) equipped with the BN-1200 and the BREST-OD-300 reactors, the processes and phenomena are singled out that require detailed analysis and the development of models so that they can be correctly described by the system thermal-hydraulic code in question. Information on the functionality of the computational code is provided, viz., the thermal-hydraulic two-phase model, the properties of the sodium and the lead coolants, the closing equations for simulation of the heat-mass exchange processes, the models describing the processes that take place during a steam-generator tube rupture, etc. The article gives a brief overview of the usability of the computational code, including a description of the support documentation and the supply package, as well as the possibility of taking advantage of modern computer technologies, such as parallel computations. The paper shows the current state of verification and validation of the computational code; it also presents information on the principles of constructing and populating the verification matrices for the BREST-OD-300 and the BN-1200 reactor systems. The prospects are outlined for further development of the HYDRA-IBRAE/LM code, introduction of new models into it, and enhancement of its usability. It is shown that the program of development and practical application of the code will make it possible, in the near future, to carry out computations for analyzing the safety of prospective NPP projects at a qualitatively higher level.
When Content Matters: The Role of Processing Code in Tactile Display Design.
Ferris, Thomas K; Sarter, Nadine
2010-01-01
The distribution of tasks and stimuli across multiple modalities has been proposed as a means to support multitasking in data-rich environments. Recently, the tactile channel and, more specifically, communication via the use of tactile/haptic icons have received considerable interest. Past research has examined primarily the impact of concurrent task modality on the effectiveness of tactile information presentation. However, it is not well known to what extent the interpretation of iconic tactile patterns is affected by another attribute of information: the information processing codes of concurrent tasks. In two driving simulation studies (n = 25 for each), participants decoded icons composed of either spatial or nonspatial patterns of vibrations (engaging spatial and nonspatial processing code resources, respectively) while concurrently interpreting spatial or nonspatial visual task stimuli. As predicted by Multiple Resource Theory, performance was significantly worse (approximately 5-10 percent worse) when the tactile icons and visual tasks engaged the same processing code, with the overall worst performance in the spatial-spatial task pairing. The findings from these studies contribute to an improved understanding of information processing and can serve as input to multidimensional quantitative models of timesharing performance. From an applied perspective, the results suggest that competition for processing code resources warrants consideration, alongside other factors such as the naturalness of signal-message mapping, when designing iconic tactile displays. Nonspatially encoded tactile icons may be preferable in environments which already rely heavily on spatial processing, such as car cockpits.
Automation of the Lowell Observatory 0.8-m Telescope
NASA Astrophysics Data System (ADS)
Buie, M. W.
2001-11-01
In the past year I have converted the Lowell Observatory 0.8-m telescope from a classically scheduled and operated telescope to an automated facility. The new setup uses an existing CCD camera and the existing telescope control system. The key steps in the conversion were writing a new CCD control and data acquisition module plus writing communication and queue control software. The previous CCD control program was written for DOS and much of the code was reused for this project. The entire control system runs under Linux and consists of four daemons: MOVE, PCCD, CMDR, and PCTL. The MOVE daemon is a process that communicates with the telescope control system via an RS232 port, keeping track of its state and forwarding commands from other processes to the telescope. The PCCD daemon controls the CCD camera and collects data. The CMDR daemon maintains a FIFO queue of commands to be executed during the night. The PCTL daemon receives notification from any other daemon of execution failures and sends an error code to the on-duty observer via a numeric pager. This system runs through the night much as you would traditionally operate a telescope. However, this system permits queuing up all the commands for a night, and they execute one after another in sequence. Additional commands are needed to replace the normal human interaction during observing (i.e., target acquisition, field registration, focusing). Also, numerous temporal synchronization commands are required so that observations happen at the right time. The system was used for this year's photometric monitoring of Pluto and Triton and is in general use for 2/3 of the time on the telescope. Pluto observations were collected on 30 nights out of a potential pool of 90 nights. Detailed system design and capabilities plus sample observations will be presented. Also, a live demonstration will be provided if the weather is good. This work was supported by NASA Grant NAG5-4210 and the NSF REU Program grant to NAU.
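The queueing idea (commands executed in FIFO order, each waiting for its scheduled time, with failures reported to the on-duty observer) can be sketched in a few lines. This is a schematic analogue of the CMDR/PCTL interaction, not the observatory's control software.

```python
import queue
import threading
import time

def commander(cmd_queue, notify_failure):
    """Minimal analogue of a night-long command queue: execute commands in
    FIFO order, wait until each command's scheduled time, and report failures."""
    while True:
        start_time, name, action = cmd_queue.get()
        time.sleep(max(0.0, start_time - time.time()))   # temporal synchronization
        try:
            action()
        except Exception as exc:
            notify_failure(f"{name} failed: {exc}")       # e.g. page the on-duty observer

# illustrative use
q = queue.Queue()
q.put((time.time() + 1.0, "focus", lambda: print("running focus sequence")))
q.put((time.time() + 2.0, "pluto_field", lambda: print("acquiring Pluto field")))
threading.Thread(target=commander, args=(q, print), daemon=True).start()
time.sleep(3.0)
```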
Infrared Spectroscopy of Star Formation in Galactic and Extragalactic Regions
NASA Technical Reports Server (NTRS)
Smith, Howard A.; Hasan, Hashima (Technical Monitor)
2004-01-01
Last year we submitted and had accepted a paper entitled "The Far-Infrared Emission Line and Continuum Spectrum of the Seyfert Galaxy NGC 1068," by Spinoglio, L., Malkan, M., Smith, H. A., Gonzalez-Alfonso, E., and Fischer, J. This analysis was based on the SWAS Monte Carlo code modeling of the OH lines in galaxies observed by ISO. Since that meeting last spring, considerable effort has been put into improving the Monte Carlo code. A group of European astronomers, including Prof. Eduardo Gonzalez-Alfonso, had been performing Monte Carlo modeling of other molecules seen in ISO galaxies. We used portions of this grant to bring Prof. Gonzalez-Alfonso to Cambridge for an intensive working visit. A second major paper on the ISO IR spectroscopy of galaxies, "The Far Infrared Spectrum of Arp 220," Gonzalez-Alfonso, E., Smith, H., Fischer, J., and Cernicharo, J., is in press. Spitzer science development was the major component of this past year's research. This program supported the development of five Early Release Objects for Spitzer observations on which Dr. Smith was Principal Investigator or Co-Investigator, and another five proposals for GO time. The early release program is designed to rapidly present to the public and the scientific community some exciting results from Spitzer in the first months of its operation. The Spitzer instrument and science teams submitted proposals for ERO objects, and a competitive selection process narrowed these down to a small group with exciting science and realistic observational parameters. This grant supported Dr. Smith's participation in the ERO process, including developing science goals, identifying key objects for observation, and developing the detailed AORs (observing formulae) to be used by the instruments for mapping, integrating, etc. During this year Dr. Smith worked on writing up and publishing these early results. The attached bibliography includes six of Dr. Smith's articles. During this past year Dr. Smith also led or helped to develop proposals for ten Spitzer GO Programs, and three others. Appendix B lists the programs involved.
NASA Astrophysics Data System (ADS)
Zhang, Chongfu; Qiu, Kun; Xu, Bo; Ling, Yun
2008-05-01
This paper proposes an all-optical label processing scheme that uses the multiple optical orthogonal codes sequences (MOOCS)-based optical label for optical packet switching (OPS) (MOOCS-OPS) networks. In this scheme, each MOOCS is a permutation or combination of the multiple optical orthogonal codes (MOOC) selected from the multiple-groups optical orthogonal codes (MGOOC). Following a comparison of different optical label processing (OLP) schemes, the principles of MOOCS-OPS network are given and analyzed. Firstly, theoretical analyses are used to prove that MOOCS is able to greatly enlarge the number of available optical labels when compared to the previous single optical orthogonal code (SOOC) for OPS (SOOC-OPS) network. Then, the key units of the MOOCS-based optical label packets, including optical packet generation, optical label erasing, optical label extraction and optical label rewriting etc., are given and studied. These results are used to verify that the proposed MOOCS-OPS scheme is feasible.
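The enlargement of the label space can be seen with a simple count: a single-code label scheme offers only as many labels as there are codes, whereas ordered sequences of distinct codes grow combinatorially. The sketch below assumes labels are ordered sequences of distinct codes, which is one possible reading of the scheme, not a statement of the paper's exact construction.

```python
from itertools import permutations
from math import comb, factorial

def sooc_label_count(n_codes):
    """With a single optical orthogonal code (SOOC) per label, the label space
    is just the number of available codes."""
    return n_codes

def moocs_label_count(n_codes, seq_len):
    """With multiple-OOC sequences (MOOCS), each label is an ordered sequence of
    seq_len distinct codes, so the label space grows combinatorially."""
    return comb(n_codes, seq_len) * factorial(seq_len)

codes = ["OOC1", "OOC2", "OOC3", "OOC4"]
print(sooc_label_count(len(codes)))          # 4 single-code labels
print(moocs_label_count(len(codes), 2))      # 12 ordered two-code labels
print(list(permutations(codes, 2))[:3])      # a few example label sequences
```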
Force-Free Magnetic Fields Calculated from Automated Tracing of Coronal Loops with AIA/SDO
NASA Astrophysics Data System (ADS)
Aschwanden, M. J.
2013-12-01
One of the most realistic magnetic field models of the solar corona is a nonlinear force-free field (NLFFF) solution. There exist about a dozen numeric codes that compute NLFFF solutions based on extrapolations of photospheric vector magnetograph data. However, since the photosphere and lower chromosphere are not force-free, a suitable correction has to be applied to the lower boundary condition. Despite such "pre-processing" corrections, the resulting theoretical magnetic field lines deviate substantially from observed coronal loop geometries. Here we developed an alternative method that fits an analytical NLFFF approximation to the observed geometry of coronal loops. The 2D coordinates of the geometry of coronal loop structures observed with AIA/SDO are traced with the "Oriented Coronal CUrved Loop Tracing" (OCCULT-2) code, an automated pattern recognition algorithm whose loop tracing has been demonstrated to match visual perception. A potential magnetic field solution is then derived from a line-of-sight magnetogram observed with HMI/SDO, and an analytical NLFFF approximation is forward-fitted to the twisted geometry of coronal loops. We demonstrate the performance of this magnetic field modeling method for a number of solar active regions, before and after major flares observed with SDO. The difference between the NLFFF and potential field energies then allows us to compute the free magnetic energy, which is an upper limit of the energy that is released during a solar flare.
EOS MLS Level 1B Data Processing, Version 2.2
NASA Technical Reports Server (NTRS)
Perun, Vincent; Jarnot, Robert; Pickett, Herbert; Cofield, Richard; Schwartz, Michael; Wagner, Paul
2009-01-01
A computer program performs level-1B processing (the term 1B is explained below) of data from observations of the limb of the Earth by the Earth Observing System (EOS) Microwave Limb Sounder (MLS), an instrument aboard the Aura spacecraft. This software accepts, as input, the raw EOS MLS scientific and engineering data and the Aura spacecraft ephemeris and attitude data. Its output consists of calibrated instrument radiances and associated engineering and diagnostic data. [This software is one of several computer programs, denoted product generation executives (PGEs), for processing EOS MLS data. Starting from level 0 (representing the aforementioned raw data), the PGEs and their data products are denoted by alphanumeric labels (e.g., 1B and 2) that signify the successive stages of processing.] At the time of this reporting, this software is at version 2.2 and incorporates improvements over a prior version that make the code more robust, improve calibration, provide more diagnostic outputs, improve the interface with the Level 2 PGE, and effect a 15-percent reduction in file sizes by use of data compression.
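To make the data flow concrete, here is a minimal sketch of a level-1B step of the kind described: raw counts plus engineering data come in, geolocation is attached from ephemeris/attitude, radiances are calibrated, and the output file is compressed. All field names, the two-point calibration, and the gzip output are assumptions for illustration; they are not the actual MLS PGE interfaces.

```python
import gzip
import json

def calibrate_radiance(counts, gain, offset):
    """Two-point (gain/offset) radiometric calibration of raw limb counts.
    The real MLS calibration is far more involved; this only shows the flow."""
    return [(c - offset) * gain for c in counts]

def level1b(raw_records, ephemeris, attitude):
    """Hypothetical level-1B step: attach geolocation from spacecraft
    ephemeris/attitude and calibrate each raw record."""
    products = []
    for rec, eph, att in zip(raw_records, ephemeris, attitude):
        products.append({
            "time": rec["time"],
            "tangent_point": eph["tangent_point"],   # from ephemeris
            "boresight": att["boresight"],           # from attitude
            "radiance": calibrate_radiance(rec["counts"], rec["gain"], rec["offset"]),
        })
    return products

def write_compressed(products, path):
    """Data compression of the output (the abstract cites ~15% smaller files
    from compression; gzip+JSON here is only illustrative)."""
    with gzip.open(path, "wt") as fh:
        json.dump(products, fh)
```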
Image gathering and coding for digital restoration: Information efficiency and visual quality
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; John, Sarah; Mccormick, Judith A.; Narayanswamy, Ramkumar
1989-01-01
Image gathering and coding are commonly treated as tasks separate from each other and from the digital processing used to restore and enhance the images. The goal is to develop a method that allows us to assess quantitatively the combined performance of image gathering and coding for the digital restoration of images with high visual quality. Digital restoration is often interactive because visual quality depends on perceptual rather than mathematical considerations, and these considerations vary with the target, the application, and the observer. The approach is based on the theoretical treatment of image gathering as a communication channel (J. Opt. Soc. Am. A 2, 1644 (1985); 5, 285 (1988)). Initial results suggest that the practical upper limit of the information contained in the acquired image data ranges typically from approximately 2 to 4 binary information units (bifs) per sample, depending on the design of the image-gathering system. The associated information efficiency of the transmitted data (i.e., the ratio of information to data) ranges typically from approximately 0.3 to 0.5 bif per bit without coding to approximately 0.5 to 0.9 bif per bit with lossless predictive compression and Huffman coding. The visual quality that can be attained with interactive image restoration improves perceptibly as the available information increases to approximately 3 bifs per sample. However, the perceptual improvements that can be attained with further increases in information are very subtle and depend on the target and the desired enhancement.
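As a toy illustration of the quantities involved (information per sample and information efficiency before and after lossless predictive compression with Huffman coding), the sketch below uses a synthetic correlated 8-bit scan line and treats the entropy of the prediction residuals as a rough proxy for the acquired information. This is only a numerical stand-in, not the paper's radiometric channel model.

```python
import heapq
from collections import Counter

import numpy as np

def entropy_bits_per_sample(x):
    """Shannon entropy of the sample histogram, in binary information units per sample."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def huffman_rate(symbols):
    """Average Huffman code length (bits per symbol) for the given symbol stream."""
    freq = Counter(symbols)
    if len(freq) < 2:
        return 1.0
    heap = [(w, i, s) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:  # merge the two lightest nodes until one tree remains
        w1, _, a = heapq.heappop(heap)
        w2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, uid, (a, b)))
        uid += 1
    depths = {}
    def walk(node, d):
        if isinstance(node, tuple):
            walk(node[0], d + 1)
            walk(node[1], d + 1)
        else:
            depths[node] = d  # leaf depth = code length
    walk(heap[0][2], 0)
    total = sum(freq.values())
    return sum(freq[s] * depths[s] for s in freq) / total

# synthetic, spatially correlated 8-bit scan line standing in for acquired image data
rng = np.random.default_rng(0)
row = np.clip(128 + np.cumsum(rng.integers(-2, 3, size=20_000)), 0, 255).astype(int)
residuals = np.diff(row)                      # lossless predictive (previous-sample) coding

info = entropy_bits_per_sample(residuals)     # rough proxy for information per sample (bifs)
print(f"information                   : {info:.2f} bif/sample")
print(f"efficiency, no coding         : {info / 8.0:.2f} bif/bit")
print(f"efficiency, predictive+Huffman: {info / huffman_rate(residuals.tolist()):.2f} bif/bit")
```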
Practical guide to bar coding for patient medication safety.
Neuenschwander, Mark; Cohen, Michael R; Vaida, Allen J; Patchett, Jeffrey A; Kelly, Jamie; Trohimovich, Barbara
2003-04-15
Bar coding for the medication administration step of the drug-use process is discussed. FDA will propose a rule in 2003 that would require bar-code labels on all human drugs and biologicals. Even with an FDA mandate, manufacturer procrastination and possible shifts in product availability are likely to slow progress. Such delays should not preclude health systems from adopting bar-code-enabled point-of-care (BPOC) systems to achieve gains in patient safety. Bar-code technology is a replacement for traditional keyboard data entry. The elements of bar coding are content, which determines the meaning; data format, which refers to how the data are embedded; and symbology, which describes the "font" in which the machine-readable code is written. For a BPOC system to deliver an acceptable level of patient protection, the hospital must first establish reliable processes for a patient identification band, caregiver badge, and medication bar coding. Medications can have either drug-specific or patient-specific bar codes. Both varieties result in the desired code that supports the patient's five rights of drug administration. When medications are not available from the manufacturer in immediate-container bar-coded packaging, other means of applying the bar code must be devised, including the use of repackaging equipment, overwrapping, manual bar coding, and outsourcing. Virtually all medications should be bar coded, the bar code on the label should be easily readable, and appropriate policies, procedures, and checks should be in place. Bar coding has the potential not only to be cost-effective but to produce a return on investment. By bar coding patient identification tags, caregiver badges, and immediate-container medications, health systems can substantially increase patient safety during medication administration.
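Purely as an illustration of the point-of-care check implied by the "five rights" (right patient, drug, dose, route, and time), the sketch below matches the three scans described in the abstract against an active order. All field names, identifiers, and the time window are hypothetical; this is not drawn from any real BPOC product.

```python
from datetime import datetime, timedelta

def verify_administration(order, patient_scan, caregiver_scan, med_scan, now=None):
    """Return a list of 'five rights' violations (empty list means the scan passes).
    All field names are illustrative, not those of any real BPOC system."""
    now = now or datetime.now()
    problems = []
    if patient_scan != order["patient_id"]:
        problems.append("wrong patient")
    if med_scan["ndc"] != order["ndc"]:
        problems.append("wrong drug")
    if med_scan["dose_mg"] != order["dose_mg"]:
        problems.append("wrong dose")
    if med_scan["route"] != order["route"]:
        problems.append("wrong route")
    if abs(now - order["due"]) > timedelta(minutes=order.get("window_min", 60)):
        problems.append("wrong time")
    if caregiver_scan not in order["authorized_caregivers"]:
        problems.append("caregiver not authorized for this order")
    return problems
```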
Recent Improvements in the FDNS CFD Code and its Associated Process
NASA Technical Reports Server (NTRS)
West, Jeff S.; Dorney, Suzanne M.; Turner, Jim (Technical Monitor)
2002-01-01
This viewgraph presentation gives an overview of recent improvements in the Finite Difference Navier-Stokes (FDNS) computational fluid dynamics (CFD) code and its associated process. The development of a utility, PreViewer, has essentially eliminated the creeping of simple human error into the FDNS solution process. Extension of PreViewer to encapsulate the domain decomposition process has made practical the routine use of parallel processing. The combination of CVS source control and ATS consistency validation significantly increases the efficiency of the CFD process.
The MINERVA Software Development Process
NASA Technical Reports Server (NTRS)
Narkawicz, Anthony; Munoz, Cesar A.; Dutle, Aaron M.
2017-01-01
This paper presents a software development process for safety-critical software components of cyber-physical systems. The process is called MINERVA, which stands for Mirrored Implementation Numerically Evaluated against Rigorously Verified Algorithms. The process relies on formal methods for rigorously validating code against its requirements. The software development process uses: (1) a formal specification language for describing the algorithms and their functional requirements, (2) an interactive theorem prover for formally verifying the correctness of the algorithms, (3) test cases that stress the code, and (4) numerical evaluation on these test cases of both the algorithm specifications and their implementations in code. The MINERVA process is illustrated in this paper with an application to geo-containment algorithms for unmanned aircraft systems. These algorithms ensure that the position of an aircraft never leaves a predetermined polygon region and provide recovery maneuvers when the region is inadvertently exited.
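As a hedged example of the kind of algorithm such a process would specify, formally verify, and then stress-test in code, here is a simple ray-casting point-in-polygon check with two test cases. This is only an illustration of a geo-containment primitive, not the verified MINERVA algorithms themselves; the region and points are made up.

```python
from typing import List, Tuple

def inside_polygon(p: Tuple[float, float], poly: List[Tuple[float, float]]) -> bool:
    """Ray-casting test: count crossings of a horizontal ray from p.
    Returns True when p lies inside the (simple) polygon."""
    x, y = p
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# stress cases of the kind the process would run against both the formal
# specification and the implementation (region and points are hypothetical)
region = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
assert inside_polygon((1.0, 1.0), region)
assert not inside_polygon((5.0, 1.0), region)
```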
Radiation transport around Kerr black holes
NASA Astrophysics Data System (ADS)
Schnittman, Jeremy David
This thesis describes the basic framework of a relativistic ray-tracing code for analyzing accretion processes around Kerr black holes. We begin in Chapter 1 with a brief historical summary of the major advances in black hole astrophysics over the past few decades. In Chapter 2 we present a detailed description of the ray-tracing code, which can be used to calculate the transfer function between the plane of the accretion disk and the detector plane, an important tool for modeling relativistically broadened emission lines. Observations from the Rossi X-Ray Timing Explorer have shown the existence of high frequency quasi-periodic oscillations (HFQPOs) in a number of black hole binary systems. In Chapter 3, we employ a simple "hot spot" model to explain the position and amplitude of these HFQPO peaks. The power spectrum of the periodic X-ray light curve consists of multiple peaks located at integral combinations of the black hole coordinate frequencies, with the relative amplitude of each peak determined by the orbital inclination, eccentricity, and hot spot arc length. In Chapter 4, we introduce additional features to the model to explain the broadening of the QPO peaks as well as the damping of higher frequency harmonics in the power spectrum. The complete model is used to fit the power spectra observed in XTE J1550-564, giving confidence limits on each of the model parameters. In Chapter 5 we present a description of the structure of a relativistic alpha-disk around a Kerr black hole. Given the surface temperature of the disk, the observed spectrum is calculated using the transfer function mentioned above. The features of this modified thermal spectrum may be used to infer the physical properties of the accretion disk and the central black hole. In Chapter 6 we develop a Monte Carlo code to calculate the detailed propagation of photons from a hot spot emitter scattering through a corona surrounding the black hole. The coronal scattering has two major observable effects: the inverse-Compton process alters the photon spectrum by adding a high energy power-law tail, and the random scattering of each photon effectively damps out the highest frequency modulations in the X-ray light curve. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
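The statement that QPO peaks sit at integral combinations of the coordinate frequencies can be illustrated with the standard Kerr circular-geodesic frequencies. The sketch below computes the azimuthal and radial frequencies for a prograde orbit and lists a few integer combinations; the black-hole mass, spin, and orbital radius are chosen arbitrarily for illustration and are not fitted values from the thesis.

```python
import math

G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def kerr_frequencies(mass_msun: float, spin: float, r_over_M: float):
    """Azimuthal and radial coordinate frequencies (Hz) of a prograde circular
    geodesic at Boyer-Lindquist radius r (in units of M), standard Kerr formulas."""
    f_geo = c**3 / (2.0 * math.pi * G * mass_msun * M_SUN)      # 1/(2*pi*M) in Hz
    nu_phi = f_geo / (r_over_M**1.5 + spin)
    nu_r = nu_phi * math.sqrt(1 - 6/r_over_M + 8*spin/r_over_M**1.5 - 3*spin**2/r_over_M**2)
    return nu_phi, nu_r

# hypothetical hot-spot orbit chosen only to show the pattern of peaks
nu_phi, nu_r = kerr_frequencies(mass_msun=10.0, spin=0.5, r_over_M=5.0)
peaks = sorted({m*nu_phi + n*nu_r
                for m in range(0, 3) for n in range(-2, 3)
                if m*nu_phi + n*nu_r > 0})
print(f"nu_phi = {nu_phi:.1f} Hz, nu_r = {nu_r:.1f} Hz")
print("peaks at integral combinations:", [f"{p:.0f}" for p in peaks])
```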
Kayenta Township Building & Safety Department, Tribal Green Building Code Summit Presentation
Tribal Green Building Code Summit Presentation by Kayenta Township Building & Safety Department showing how they established the building department, developed a code adoption and enforcement process, and hired staff to carry out the work.
Scanning for safety: an integrated approach to improved bar-code medication administration.
Early, Cynde; Riha, Chris; Martin, Jennifer; Lowdon, Karen W; Harvey, Ellen M
2011-03-01
This is a review of lessons learned in the postimplementation evaluation of a bar-code medication administration technology implemented at a major tertiary-care hospital in 2001. In 2006, with a bar-code medication administration scan compliance rate of 82%, a near-miss sentinel event prompted review of this technology as part of an institutional recommitment to a "culture of safety." Multifaceted problems with bar-code medication administration created an environment of circumventing safeguards, as demonstrated by an increase in manual overrides to ensure timely medication administration. A multiprofessional team composed of nursing, pharmacy, human resources, quality, and technical services was formed. Each step in the bar-code medication administration process was reviewed. Technology, process, and educational solutions were identified and implemented systematically. Overall compliance with bar-code medication administration rose from 82% to 97%, which resulted in a calculated cost avoidance of more than $2.8 million during the time frame of the project.
Crosstalk between the Notch signaling pathway and non-coding RNAs in gastrointestinal cancers
Pan, Yangyang; Mao, Yuyan; Jin, Rong; Jiang, Lei
2018-01-01
The Notch signaling pathway is one of the main signaling pathways that mediates direct contact between cells, and is essential for normal development. It regulates various cellular processes, including cell proliferation, apoptosis, migration, invasion, angiogenesis and metastasis. It additionally serves an important function in tumor progression. Non-coding RNAs mainly include small microRNAs, long non-coding RNAs and circular RNAs. At present, a large body of literature supports the biological significance of non-coding RNAs in tumor progression. It is also becoming increasingly evident that cross-talk exists between Notch signaling and non-coding RNAs. The present review summarizes the current knowledge of Notch-mediated gastrointestinal cancer cell processes, and the effect of the crosstalk between the three major types of non-coding RNAs and the Notch signaling pathway on the fate of gastrointestinal cancer cells. PMID:29285185
Budisan, Liviuta; Gulei, Diana; Zanoaga, Oana Mihaela; Irimie, Alexandra Iulia; Chira, Sergiu; Braicu, Cornelia; Gherman, Claudia Diana; Berindan-Neagoe, Ioana
2017-01-01
Phytochemicals are natural compounds synthesized as secondary metabolites in plants, representing an important source of molecules with a wide range of therapeutic applications. These natural agents are important regulators of key pathological processes/conditions, including cancer, as they are able to modulate the expression of coding and non-coding transcripts with an oncogenic or tumour suppressor role. These natural agents are currently exploited for the development of therapeutic strategies alone or in tandem with conventional treatments for cancer. The aim of this paper is to review the recent studies regarding the role of these natural phytochemicals in different processes related to cancer inhibition, including apoptosis activation, angiogenesis and metastasis suppression. From the large palette of phytochemicals we selected epigallocatechin gallate (EGCG), caffeic acid phenethyl ester (CAPE), genistein, morin and kaempferol, due to their increased activity in modulating multiple coding and non-coding genes, targeting the main hallmarks of cancer. PMID:28587155
Preliminary Assessment of Turbomachinery Codes
NASA Technical Reports Server (NTRS)
Mazumder, Quamrul H.
2007-01-01
This report assesses different CFD codes developed and currently being used at Glenn Research Center to predict turbomachinery fluid flow and heat transfer behavior. This report considers the following codes: APNASA, TURBO, GlennHT, H3D, and SWIFT. Each code is described separately in the following section, with its current modeling capabilities, level of validation, pre/post processing, and future development and validation requirements. This report addresses only previously published validations of the codes; the codes have since been further developed to extend their capabilities.
Case, Laura K; Pineda, Jaime; Ramachandran, Vilayanur S
2015-01-01
Motor imagery and perception, considered generally as forms of motor simulation, share overlapping neural representations with motor production. While much research has focused on the extent of this “common coding,” less attention has been paid to how these overlapping representations interact. How do imagined, observed, or produced actions influence one another, and how do we maintain control over our perception and behavior? In the first part of this review we describe interactions between motor production and motor simulation, and explore apparent regulatory mechanisms that balance these processes. Next, we consider the somatosensory system. Numerous studies now support a “sensory mirror system” comprised of neural representations activated by either afferent sensation or vicarious sensation. In the second part of this review we summarize evidence for shared representations of sensation and sensory simulation (including imagery and observed sensation), and suggest that similar interactions and regulation of simulation occur in the somatosensory domain as in the motor domain. We suggest that both motor and somatosensory simulations are flexibly regulated to support simulations congruent with our sensorimotor experience and goals and suppress or separate the influence of those that are not. These regulatory mechanisms are frequently revealed by cases of brain injury but can also be employed to facilitate sensorimotor rehabilitation. PMID:25863237
ABINIT: Plane-Wave-Based Density-Functional Theory on High Performance Computers
NASA Astrophysics Data System (ADS)
Torrent, Marc
2014-03-01
For several years, a continuous effort has been made to adapt electronic structure codes based on density-functional theory to future computing architectures. Among these codes, ABINIT is based on a plane-wave description of the wave functions, which allows it to treat systems of any kind. Porting such a code to petascale architectures poses difficulties related to the many-body nature of the DFT equations. To improve the performance of ABINIT, especially for standard LDA/GGA ground-state and response-function calculations, several strategies have been followed. A full multi-level MPI parallelisation scheme has been implemented, exploiting all possible levels and distributing both computation and memory. It makes it possible to increase the number of distributed processes and could not have been achieved without a strong restructuring of the code. The core algorithm used to solve the eigenproblem ("Locally Optimal Blocked Conjugate Gradient"), a Blocked-Davidson-like algorithm, is based on a distribution of processes combining plane-waves and bands. In addition to the distributed-memory parallelization, a full hybrid scheme has been implemented, using standard shared-memory directives (OpenMP/OpenACC) or porting some compute-intensive code sections to graphics processing units (GPUs). As no simple performance model exists, the complexity of use has increased: the code efficiency strongly depends on the distribution of processes among the numerous levels. ABINIT is able to predict the performance of several process distributions and automatically choose the most favourable one. On the other hand, a substantial effort has been carried out to analyse the performance of the code on petascale architectures, showing which sections of the code have to be improved; they are all related to matrix algebra (diagonalisation, orthogonalisation). The different strategies employed to improve the code scalability will be described. They are based on the exploration of new diagonalisation algorithms, as well as the use of external optimized libraries. Part of this work has been supported by the European PRACE project (Partnership for Advanced Computing in Europe) in the framework of its work package 8.
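To illustrate the idea of automatically choosing a process distribution across parallelisation levels, the sketch below enumerates factorizations of the available MPI processes over k-point, band, and plane-wave/FFT levels and keeps the one with the best score under a deliberately crude cost model. The cost model and the numbers are invented for illustration; they are not ABINIT's internal heuristics.

```python
from itertools import product

def candidate_distributions(nproc, nkpt, nband):
    """All (np_kpt, np_band, np_fft) splittings with np_kpt*np_band*np_fft == nproc."""
    for np_kpt, np_band, np_fft in product(range(1, nproc + 1), repeat=3):
        if np_kpt * np_band * np_fft == nproc and np_kpt <= nkpt and np_band <= nband:
            yield np_kpt, np_band, np_fft

def score(dist, nkpt, nband):
    """Made-up cost model: k-point parallelism is nearly free, band parallelism
    is cheap, FFT parallelism pays a communication penalty; lower is better."""
    np_kpt, np_band, np_fft = dist
    load_imbalance = (nkpt % np_kpt) + (nband % np_band)
    comm_penalty = 0.2 * np_band + 1.0 * (np_fft - 1)
    return load_imbalance + comm_penalty

nkpt, nband, nproc = 8, 256, 64   # hypothetical problem size and process count
best = min(candidate_distributions(nproc, nkpt, nband), key=lambda d: score(d, nkpt, nband))
print("chosen (np_kpt, np_band, np_fft):", best)
```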
Startsev, N; Dimov, P; Grosche, B; Tretyakov, F; Schüz, J; Akleyev, A
2015-01-01
To follow up populations exposed to several radiation accidents in the Southern Urals, a cause-of-death registry was established at the Urals Center, capturing deaths in the Chelyabinsk, Kurgan and Sverdlovsk regions since 1950. When registering deaths over such a long time period, quality measures need to be in place to maintain quality and reduce the impact of individual coders as well as of quality changes in death certificates. To ensure the uniformity of coding, a method for semi-automatic coding was developed, which is described here. Briefly, the method is based on a dynamic thesaurus, database-supported coding and parallel coding by two different individuals. A comparison of the proposed method for organizing the coding process with the common coding procedure showed good agreement, with, at the end of the coding process, 70-90% agreement for the three-digit ICD-9 rubrics. The semi-automatic method ensures a sufficiently high quality of coding while at the same time providing an opportunity to reduce the labor intensity inherent in the creation of large-volume cause-of-death registries.
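A minimal sketch of the workflow described, under stated assumptions: certificate phrases are looked up in a dynamic thesaurus of previously coded causes, two coders work in parallel, and agreement is checked at the three-digit ICD-9 rubric level. The thesaurus content, codes, and example data are invented.

```python
# hypothetical phrase -> ICD-9 thesaurus, grown dynamically during coding
thesaurus = {
    "acute myocardial infarction": "410.9",
    "stomach cancer": "151.9",
}

def suggest_code(cause_text):
    """Database-supported step: return the stored code if the phrase is already known."""
    return thesaurus.get(cause_text.strip().lower())

def rubric(icd9):
    """Three-digit ICD-9 rubric used for the agreement check."""
    return icd9.split(".")[0]

def agreement_rate(coder_a, coder_b):
    """Share of certificates where both coders assign the same three-digit rubric."""
    same = sum(rubric(a) == rubric(b) for a, b in zip(coder_a, coder_b))
    return same / len(coder_a)

print(suggest_code("Acute myocardial infarction"))                         # '410.9'
print(agreement_rate(["410.9", "151.9", "162.9"], ["410.1", "151.0", "250.0"]))  # ~0.67
```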
Conceptual-driven classification for coding advise in health insurance reimbursement.
Li, Sheng-Tun; Chen, Chih-Chuan; Huang, Fernando
2011-01-01
With the non-stop increases in medical treatment fees, the economic survival of a hospital in Taiwan relies on the reimbursements received from the Bureau of National Health Insurance, which in turn depend on the accuracy and completeness of the content of the discharge summaries as well as the correctness of their International Classification of Diseases (ICD) codes. The purpose of this research is to reinforce the entire disease classification framework by supporting disease classification specialists in the coding process. This study developed an ICD code advisory system (ICD-AS) that performed knowledge discovery from discharge summaries and suggested ICD codes. Natural language processing and information retrieval techniques based on Zipf's law were applied to process the content of discharge summaries, and fuzzy formal concept analysis was used to analyze and represent the relationships between the medical terms identified by MeSH. In addition, a certainty factor used as reference during the coding process was calculated to account for uncertainty and strengthen the credibility of the outcome. Two sets of 360 and 2579 textual discharge summaries of patients suffering from cerebrovascular disease were processed to build up ICD-AS and to evaluate the prediction performance. A number of experiments were conducted to investigate the impact of system parameters on accuracy and to compare the proposed model to traditional classification techniques, including linear-kernel support vector machines. The comparison results showed that the proposed system achieves better overall performance in terms of several measures. In addition, some useful implication rules were obtained, which improve comprehension of the field of cerebrovascular disease and give insights into the relationships between relevant medical terms. Our system contributes valuable guidance to disease classification specialists in the process of coding discharge summaries, which consequently brings benefits for patients, hospitals, and the healthcare system. Copyright © 2010 Elsevier B.V. All rights reserved.
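A toy sketch of two of the ingredients named above: Zipf's-law-based term weighting over a discharge summary, and a certainty factor combining evidence for a candidate ICD code. The term lists, weights, example summary, and the classic certainty-factor combination rule used here are assumptions for illustration, not the paper's actual implementation.

```python
from collections import Counter
import re

def zipf_weights(text):
    """Rank terms by frequency and weight them ~ 1/rank, following Zipf's law."""
    terms = re.findall(r"[a-z]+", text.lower())
    ranked = [t for t, _ in Counter(terms).most_common()]
    return {t: 1.0 / (rank + 1) for rank, t in enumerate(ranked)}

def combine_cf(cf_values):
    """Classic certainty-factor combination for independent positive evidence."""
    cf = 0.0
    for v in cf_values:
        cf = cf + v * (1.0 - cf)
    return cf

# hypothetical evidence linking extracted terms to one candidate ICD-9 code
evidence = {"infarction": 0.6, "hemiparesis": 0.4, "occlusion": 0.5}
summary = "Acute cerebral infarction with right hemiparesis; occlusion of the MCA."

weights = zipf_weights(summary)
cf = combine_cf(weights.get(term, 0.0) * strength for term, strength in evidence.items())
print(f"certainty factor for the candidate code: {cf:.2f}")
```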
Scheduling observational and physical practice: influence on the coding of simple motor sequences.
Ellenbuerger, Thomas; Boutin, Arnaud; Blandin, Yannick; Shea, Charles H; Panzer, Stefan
2012-01-01
The main purpose of the present experiment was to determine the coordinate system used in the development of movement codes when observational and physical practice are scheduled across practice sessions. The task was to reproduce a 1,300-ms spatial-temporal pattern of elbow flexions and extensions. An intermanual transfer paradigm with a retention test and two effector (contralateral limb) transfer tests was used. The mirror effector transfer test required the same pattern of homologous muscle activation and sequence of limb joint angles as that performed or observed during practice, and the non-mirror effector transfer test required the same spatial pattern of movements as that performed or observed. The test results following the first acquisition session replicated the findings of Gruetzmacher, Panzer, Blandin, and Shea (2011). The results following the second acquisition session indicated a strong advantage for participants who received physical practice in both practice sessions or received observational practice followed by physical practice. This advantage was found on both the retention and the mirror transfer tests compared to the non-mirror transfer test. These results demonstrate that codes based in motor coordinates can be developed relatively quickly and effectively for a simple spatial-temporal movement sequence when participants are provided with physical practice or observation followed by physical practice, but physical practice followed by observational practice or observational practice alone limits the development of codes based in motor coordinates.
The orchestration of occupation: the dance of mothers.
Larson, E A
2000-01-01
This article describes the relationship of mothers' orchestration of daily occupations, the specialized maternal work of parenting a child with a disability, and the mother's subjective well-being. Mothers' daily occupations and subjective well-being were studied using multiple in-depth interviews, participant observation of a day's round of occupations, and scales of well-being. Data were subjected to a recursive analysis, which included theoretical notes generated during transcription that identified important themes and additional points of inquiry, line-by-line coding of transcripts, and theoretical sorting, regrouping, and recoding of codes. To account for patterns in the data, a relational analysis was conducted that included the generation of metaphors. Emergent findings of this analysis identified the mothers' guiding occupational motif and eight processes of orchestration in their daily routines. The occupational motif, the embrace of paradox, directed the mothers' orchestration of daily occupations. The orchestration processes included planning, organizing, balancing, anticipating, interpreting, forecasting, perspective shifting, and meaning making. Examples illustrate the maternally driven and child-sensitive nature of these processes. In their daily rounds, the mothers studied were attentive to the manner and method with which they interacted with their children to produce child-contingent occupations commensurate with their values of being a good mother. Using these orchestration processes, mothers made sense of their past, designed their present, and planned for their future within their daily occupational rounds for themselves and family members.
Caregiver Cognition and Behavior in Day-Care Classrooms.
ERIC Educational Resources Information Center
Holloway, Susan D.
A study examined the relationship between change in daycare children's classroom behavior and the teacher's socialization behavior. Various behaviors of 69 children in 24 classrooms were observed and coded in the fall and spring of the school year. Observers coded teacher behavior according to the Caregiver Interaction Scale, which assesses…
Experience in highly parallel processing using DAP
NASA Technical Reports Server (NTRS)
Parkinson, D.
1987-01-01
Distributed Array Processors (DAPs) have been in day-to-day use for ten years, and a large amount of user experience has been gained. The profile of user applications is similar to that of the Massively Parallel Processor (MPP) working group. Experience has shown that, contrary to expectations, highly parallel systems provide excellent performance on so-called dirty problems such as the physics part of meteorological codes. The reasons for this observation are discussed. The arguments against replacing bit processors with floating point processors are also discussed.
Observing heliospheric neutral atoms at 1 AU
NASA Astrophysics Data System (ADS)
Heerikhuisen, Jacob; Pogorelov, Nikolai; Florinski, Vladimir; Zank, Gary
2006-09-01
Although in situ observations of distant heliospheric plasma by the Voyagers have proven to be extremely enlightening, such point observations need to be complemented with global measurements taken remotely to obtain a complete picture of the heliosphere and the local interstellar environment. Neutral atoms, with their contempt for magnetic fields, provide useful probes of the plasma that generated them. However, there will be a number of ambiguities in neutral atom readings that require a deeper understanding of the plasma processes generating neutral atoms, as well as of the loss mechanisms on their flight to the observation point. We introduce a procedure for generating all-sky maps of energetic H atoms, calculated directly in our Monte-Carlo neutral atom code. Results obtained for a self-consistent axisymmetric MHD-Boltzmann calculation, as well as several non-self-consistent 3D sky maps, will be presented.
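The basic bookkeeping behind such an all-sky map can be sketched simply: each simulated H atom arriving at 1 AU carries an arrival direction and a statistical weight, and the map is a weighted two-dimensional histogram over sky coordinates. The grid, the fake sample, and the enhancement toward one longitude below are purely illustrative and are not taken from the actual MHD-Boltzmann code.

```python
import numpy as np

def allsky_map(lon_deg, lat_deg, weights, nlon=60, nlat=30):
    """Weighted histogram of arrival directions -> all-sky map on a lon/lat grid."""
    H, _, _ = np.histogram2d(
        lon_deg, lat_deg, bins=[nlon, nlat],
        range=[[0.0, 360.0], [-90.0, 90.0]], weights=weights,
    )
    return H  # shape (nlon, nlat); divide by pixel solid angle to get flux per steradian

# fake Monte-Carlo sample: isotropic directions with an excess toward one longitude
rng = np.random.default_rng(1)
n = 100_000
lon = rng.uniform(0.0, 360.0, n)
lat = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, n)))   # uniform on the sphere
w = 1.0 + 2.0 * np.exp(-((lon - 255.0) ** 2) / (2 * 30.0 ** 2))
sky = allsky_map(lon, lat, w)
print(sky.shape, sky.sum())
```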
Nested polynomial trends for the improvement of Gaussian process-based predictors
NASA Astrophysics Data System (ADS)
Perrin, G.; Soize, C.; Marque-Pucheu, S.; Garnier, J.
2017-10-01
The role of simulation keeps increasing for the sensitivity analysis and the uncertainty quantification of complex systems. Such numerical procedures are generally based on the processing of a huge number of code evaluations. When the computational cost associated with one particular evaluation of the code is high, such direct approaches, based on the computer code only, are not affordable. Surrogate models therefore have to be introduced to interpolate the information given by a fixed set of code evaluations to the whole input space. When confronted with deterministic mappings, Gaussian process regression (GPR), or kriging, presents a good compromise between complexity, efficiency and error control. Such a method considers the quantity of interest of the system as a particular realization of a Gaussian stochastic process, whose mean and covariance functions have to be identified from the available code evaluations. In this context, this work proposes an innovative parametrization of this mean function, which is based on the composition of two polynomials. This approach is particularly relevant for the approximation of strongly nonlinear quantities of interest from very little information. After presenting the theoretical basis of this method, this work compares its efficiency to alternative approaches on a series of examples.
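A minimal numpy sketch of the idea of a GP predictor whose trend is a composition of two polynomials: here the inner polynomial is fixed (an assumption made only to keep the fit linear), the outer polynomial coefficients are obtained by least squares, and an RBF-covariance GP interpolates the residuals. This illustrates the structure of such a surrogate, not the paper's estimation procedure.

```python
import numpy as np

def rbf(x1, x2, length=0.3, var=1.0):
    """Squared-exponential covariance between two 1-D sample sets."""
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def fit_nested_trend(x, y, inner=lambda t: t**2 - t, outer_degree=2):
    """Trend m(x) = P_outer(q(x)) with q fixed (assumption); P_outer fitted by least squares."""
    q = inner(x)
    A = np.vander(q, outer_degree + 1)           # columns: q^2, q, 1
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda xs: np.vander(inner(xs), outer_degree + 1) @ coef

# toy data: a strongly nonlinear quantity of interest observed at few points
rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0.0, 1.0, 12))
y_train = np.sin(8.0 * x_train**2) + 0.05 * rng.standard_normal(12)

trend = fit_nested_trend(x_train, y_train)
resid = y_train - trend(x_train)

# GP regression (kriging) on the residuals around the composed-polynomial trend
K = rbf(x_train, x_train) + 1e-6 * np.eye(len(x_train))
alpha = np.linalg.solve(K, resid)
x_test = np.linspace(0.0, 1.0, 200)
y_pred = trend(x_test) + rbf(x_test, x_train) @ alpha
```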
Di Giulio, Massimo
2017-02-07
Whereas it is extremely easy to prove that "if the biosynthetic relationships between amino acids were fundamental in the structuring of the genetic code, then their physico-chemical properties might also be revealed in the genetic code table", it is, on the contrary, impossible to prove that "if the physico-chemical properties of amino acids were fundamental in the structuring of the genetic code, then the presence of the biosynthetic relationships between amino acids should not be revealed in the genetic code". And, given that both the biosynthetic relationships between amino acids and their physico-chemical properties are mirrored in the genetic code table, all this would be a test that would falsify the physico-chemical theories of the origin of the genetic code. That is to say, if the physico-chemical properties of amino acids had a fundamental role in organizing the genetic code, then we would not have duly revealed the presence, in the genetic code, of the biosynthetic relationships between amino acids, and on the contrary this has been observed. Therefore, this falsifies the physico-chemical theories of genetic code origin. In contrast, the coevolution theory of the origin of the genetic code would be corroborated by this analysis, because it would be able to give a description of the evolution of the genetic code more coherent with the indisputable empirical observations that link both the biosynthetic relationships of amino acids and their physico-chemical properties to the evolutionary organization of the genetic code. Copyright © 2016 Elsevier Ltd. All rights reserved.
Two-Dimensional Parson's Puzzles: The Concept, Tools, and First Observations
ERIC Educational Resources Information Center
Ihantola, Petri; Karavirta, Ville
2011-01-01
Parson's programming puzzles are a family of code construction assignments where lines of code are given, and the task is to form the solution by sorting and possibly selecting the correct code lines. We introduce a novel family of Parson's puzzles where the lines of code need to be sorted in two dimensions. The vertical dimension is used to order…