NASA Astrophysics Data System (ADS)
Murray, B.; Barnes, M. H.; Chambers, L. H.; Pippin, M. R.; Martin, A. M.; Geyer, A. J.; Leber, M.; Joyner, E.; Small, C.; Dublin, D.
2013-12-01
The Minority University Research and Education Program (MUREP) NASA Innovations in Climate Education (NICE) project advances NASA's Office of Education's strategic initiative to improve the quality of the nation's Science, Technology, Engineering and Mathematics (STEM) education and enhance literacy about climate and other Earth systems environmental changes. NICE also strategically supports the United States' progressive initiative to enhance the science and technology enterprise for successful competition in the 21st century global community. To extend to wider networks in 2013, MUREP NICE partnered with the NASA Digital Learning Network (DLN™) in a unique, non-traditional collaborative model to significantly increase the impact and connection with formal and informal educators, curriculum developers, science education specialists, and researchers regarding climate literacy. DLN offers an expansive distance learning capability that bridges presenters with education audiences for interactive, web-based, synchronous and asynchronous Educator Professional Development (EPD). DLN services over 10,000 educators each year; in the 3rd quarter of FY13 alone, DLN totaled 3,361 connections with educators. The DLN allows for cost-effective (no travel) engagement of multiple geographically dispersed audiences with presenters from remote locations. This facilitates interactive communication among participants through distance education, allowing them to share local experiences with one another. A comprehensive four-part EPD workshop, featuring several NICE Principal Investigators (PIs) and NASA subject matter experts, was developed for NICE in April 2013. Topics covered in the workshop progressed from a simple introduction to Earth's energy budget, through an explanation of temperature data collection and evidence of temperature rise, to impacts on phenology, and finally consequences for bugs and birds. This event was an innovative hybrid workshop, connecting onsite teachers interactively with remotely connected participants and presenters across the nation. In addition to the 19 educators who participated live, 298 watched the sessions via a webcast. A similar workshop series experienced 300% growth in 2 years, indicating the potential for comparable growth of NICE events. Due to unanimous requests for more information on these and other topics, beginning Fall 2013, NICE will reach into additional educators' classrooms via the DLN to deliver continued EPD from NICE PIs and other NASA researchers. Through this capability, hundreds of additional unique viewers have been exposed to NICE this year. This large-scale effort allows for long-term, sustained engagement of the global community. We intend to take advantage of the capabilities of the DLN as we continue to scale NICE events to wider audiences. The use of distance education allows for immediate release of new information and more frequent connections, resulting in sustained engagement of participants. This presentation will explore the various successes and future opportunities for expanding the impact of climate literacy via the NASA DLN, a large-scale collaborative network.
Digital Learning Network Education Events for the Desert Research and Technology Studies
NASA Technical Reports Server (NTRS)
Paul, Heather L.; Guillory, Erika R.
2007-01-01
NASA's Digital Learning Network (DLN) reaches out to thousands of students each year through video conferencing and webcasting. As part of NASA's Strategic Plan to reach the next generation of space explorers, the DLN develops and delivers educational programs that reinforce principles in the areas of science, technology, engineering and mathematics. The DLN has created a series of live education videoconferences connecting the Desert Research and Technology Studies (RATS) field test to students across the United States. The programs are also extended to students around the world via live webcasting. The primary focus of the events is the Vision for Space Exploration. During the programs, Desert RATS engineers and scientists inform and inspire students about the importance of exploration and share the importance of the field test as it correlates with plans to return to the Moon and explore Mars. This paper describes the events that took place in September 2006.
Using Long-Distance Scientist Involvement to Enhance NASA Volunteer Network Educational Activities
NASA Astrophysics Data System (ADS)
Ferrari, K.
2012-12-01
Since 1999, the NASA/JPL Solar System Ambassadors (SSA) and Solar System Educators (SSEP) programs have used specially-trained volunteers to expand education and public outreach beyond the immediate NASA center regions. Integrating nationwide volunteers in these highly effective programs has helped optimize agency funding set aside for education. Since these volunteers were trained by NASA scientists and engineers, they acted as "stand-ins" for the mission team members in communities across the country. Through the efforts of these enthusiastic volunteers, students gained an increased awareness of NASA's space exploration missions through Solar System Ambassador classroom visits, and teachers across the country became familiarized with NASA's STEM (Science, Technology, Engineering and Mathematics) educational materials through Solar System Educator workshops; however, the scientist was still distant. In 2003, NASA started the Digital Learning Network (DLN) to bring scientists into the classroom via videoconferencing. The first equipment was expensive and only schools that could afford the expenditure were able to benefit; however, recent advancements in software allow classrooms to connect to the DLN via personal computers and an internet connection. Through collaboration with the DLN at NASA's Jet Propulsion Laboratory and the Goddard Space Flight Center, Solar System Ambassadors and Solar System Educators in remote parts of the country are able to bring scientists into their classroom visits or workshops as guest speakers. The goals of this collaboration are to provide special elements to the volunteers' events, allow scientists opportunities for education involvement with minimal effort, acquaint teachers with DLN services and enrich students' classroom learning experiences.
Digital Learning Network Education Events of NASA's Extreme Environments Mission Operations
NASA Technical Reports Server (NTRS)
Paul, Heather; Guillory, Erika
2007-01-01
NASA's Digital Learning Network (DLN) reaches out to thousands of students each year through video conferencing and webcasting. The DLN has created a series of live education videoconferences connecting NASA's Extreme Environment Missions Operations (NEEMO) team to students across the United States. The programs are also extended to students around the world via live webcasting. The primary focus of the events is the Vision for Space Exploration. During the programs, NEEMO crewmembers, including NASA astronauts, engineers and scientists, inform and inspire students about the importance of exploration and share the impact of the project as it correlates with plans to return to the Moon and explore the planet Mars. These events highlight interactivity. Students talk live with the aquanauts in Aquarius, the National Oceanic and Atmospheric Administration's underwater laboratory. With this program, NASA continues the Agency's tradition of investing in the nation's education programs. It is directly tied to the Agency's major education goal of attracting and retaining students in science, technology, and engineering disciplines. Before connecting with the aquanauts, the students conduct experiments of their own, designed to coincide with mission objectives. This paper describes the events that took place in September 2006.
ERIC Educational Resources Information Center
Van Dyke, Aaron R.; Smith-Carpenter, Jillian
2017-01-01
The majority of undergraduates own a smartphone, yet fewer than half view it as a valuable learning technology. Consequently, a digital laboratory notebook (DLN) was developed for an upper-division undergraduate biochemistry laboratory course using the free mobile application Evernote. The cloud-based DLN capitalized on the unique features of…
Hydrometer in the mantle: dln(Vs)/dln(Vp)
NASA Astrophysics Data System (ADS)
Li, L.; Weidner, D. J.
2010-12-01
The absorption of water into nominally non-hydrous phases is the probable storage mechanism of hydrogen throughout most of the mantle. Thus the water capacity of the mantle is greatest in the transition zone, owing to the large water solubility of ringwoodite and wadsleyite. However, the actual amount of water that is stored there is highly uncertain. Since water is probably brought down by subduction activity, its abundance is probably laterally variable. Thus, a metric that is sensitive to variations in water content is a good candidate for a hydrometer. Here we evaluate dln(Vs)/dln(Vp) as such a metric. It is useful for detecting lateral variations in water if the effects of hydration on this parameter differ from those of temperature or composition. We compare the value of dln(Vs)/dln(Vp) due to temperature with that due to water content as a function of depth for the upper mantle. We have calculated dln(Vs)/dln(Vp) due to both water and temperature using a density functional theory approach and available experimental data. Our results indicate that dln(Vs)/dln(Vp) due to water is distinguishable from dln(Vs)/dln(Vp) due to temperature or variations in iron content, particularly in ringwoodite. The difference increases with depth, making the lower part of the transition zone the most identifiable as a water reservoir.
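As a compact restatement of the diagnostic described above (symbols introduced here for illustration, not taken verbatim from the paper): for a perturbing variable X (temperature, water content, or iron content), the ratio can be written

```latex
R_X \;\equiv\; \left.\frac{d\ln V_S}{d\ln V_P}\right|_{X}
     \;=\; \frac{\partial \ln V_S / \partial X}{\partial \ln V_P / \partial X},
```

and hydration is seismically distinguishable at a given depth wherever R_water differs measurably from R_T and R_Fe.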
Yoo, Jae-Kwang; Braciale, Thomas J.
2014-01-01
IL-21 is a type-I cytokine that has pleiotropic immuno-modulatory effects. Primarily produced by activated T cells including NKT and TFH cells, IL-21 plays a pivotal role in promoting TFH differentiation through poorly understood cellular and molecular mechanisms. Here, employing a mouse model of influenza A virus (IAV) infection, we demonstrate that IL-21, initially produced by NKT cells, promotes TFH differentiation by promoting the migration of late activator antigen presenting cell (LAPC), a recently identified TFH inducer, from the infected lungs into the draining lymph nodes (dLN). LAPC migration from IAV-infected lung into the dLN is CXCR3-CXCL9 dependent. IL-21-induced TNF-α production by conventional T cells is critical to stimulate CXCL9 expression by DCs in the dLN, which supports LAPC migration into the dLN and ultimately facilitates TFH differentiation. Our results reveal a previously unappreciated mechanism for IL-21 modulation of TFH responses during respiratory virus infection. PMID:25251568
Yoon, Heesik; Legge, Kevin L; Sung, Sun-sang J; Braciale, Thomas J
2007-07-01
We have used a TCR-transgenic CD8+ T cell adoptive transfer model to examine the tempo of T cell activation and proliferation in the draining lymph nodes (DLN) in response to respiratory virus infection. The T cell response in the DLN differed for mice infected with different type A influenza strains with the onset of T cell activation/proliferation to the A/JAPAN virus infection preceding the A/PR8 response by 12-24 h. This difference in T cell activation/proliferation correlated with the tempo of accelerated respiratory DC (RDC) migration from the infected lungs to the DLN in response to influenza virus infection, with the migrant RDC responding to the A/JAPAN infection exhibiting a more rapid accumulation in the lymph nodes (i.e., peak migration for A/JAPAN at 18 h, A/PR8 at 24-36 h). Furthermore, in vivo administration of blocking anti-CD62L Ab at various time points before/after infection revealed that the virus-specific CD8+ T cells entered the DLN and activated in a sequential "conveyor belt"-like fashion. These results indicate that the tempo of CD8+ T cell activation/proliferation after viral infection is dependent on the tempo of RDC migration to the DLN and that T cell activation occurs in an ordered sequential fashion.
A critical look at spatial scale choices in satellite-based aerosol indirect effect studies
NASA Astrophysics Data System (ADS)
Grandey, B. S.; Stier, P.
2010-06-01
Analysing satellite datasets over large regions may introduce spurious relationships between aerosol and cloud properties due to spatial variations in aerosol type, cloud regime and synoptic regime climatologies. Using MODerate resolution Imaging Spectroradiometer data, we calculate relationships between aerosol optical depth τa, derived liquid cloud droplet effective number concentration Ne and liquid cloud droplet effective radius re at different spatial scales. Generally, positive values of dlnNe/dlnτa are found for ocean regions, whilst negative values occur for many land regions. The spatial distribution of dlnre/dlnτa shows approximately the opposite pattern, with generally positive values for land regions and negative values for ocean regions. We find that for region sizes larger than 4°×4°, spurious spatial variations in retrieved cloud and aerosol properties can introduce widespread significant errors to calculations of dlnNe/dlnτa and dlnre/dlnτa. For regions on the scale of 60°×60°, these methodological errors may lead to an overestimate in global cloud albedo effect radiative forcing of order 80%.
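A minimal sketch of how such a logarithmic sensitivity is commonly estimated for a single region, namely as the least-squares slope of ln Ne against ln τa over that region's retrievals; the arrays below are synthetic stand-ins assumed for illustration, not MODIS data:

```python
import numpy as np

# Synthetic stand-ins for one region's retrievals (assumed values, illustration only).
rng = np.random.default_rng(0)
tau_a = rng.lognormal(mean=-2.0, sigma=0.5, size=500)          # aerosol optical depth
N_e = 50.0 * tau_a**0.3 * rng.lognormal(0.0, 0.2, size=500)    # droplet number concentration

# dlnNe/dlntau_a estimated as the slope of a straight-line fit in log-log space.
slope, _intercept = np.polyfit(np.log(tau_a), np.log(N_e), deg=1)
print(f"dlnNe/dlntau_a ~ {slope:.2f}")   # recovers ~0.3 for this synthetic case
```

Repeating this fit over regions of different sizes is what exposes the scale dependence discussed in the abstract.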
NCAR Earth Observing Laboratory's Data Tracking System
NASA Astrophysics Data System (ADS)
Cully, L. E.; Williams, S. F.
2014-12-01
The NCAR Earth Observing Laboratory (EOL) maintains an extensive collection of complex, multi-disciplinary datasets from national and international, current and historical projects accessible through field project web pages (https://www.eol.ucar.edu/all-field-projects-and-deployments). Data orders are processed through the EOL Metadata Database and Cyberinfrastructure (EMDAC) system. Behind the scenes is the institutionally created EOL Computing, Data, and Software/Data Management Group (CDS/DMG) Data Tracking System (DTS) tool. The DTS is used to track the complete life cycle (from ingest to long term stewardship) of the data, metadata, and provenance for hundreds of projects and thousands of data sets. The DTS is an EOL internal only tool which consists of three subsystems: Data Loading Notes (DLN), Processing Inventory Tool (IVEN), and Project Metrics (STATS). The DLN is used to track and maintain every dataset that comes to the CDS/DMG. The DLN captures general information such as title, physical locations, responsible parties, high level issues, and correspondence. When the CDS/DMG processes a data set, IVEN is used to track the processing status while collecting sufficient information to ensure reproducibility. This includes detailed "How To" documentation, processing software (with direct links to the EOL Subversion software repository), and descriptions of issues and resolutions. The STATS subsystem generates current project metrics such as archive size, data set order counts, "Top 10" most ordered data sets, and general information on who has ordered these data. The DTS was developed over many years to meet the specific needs of the CDS/DMG, and it has been successfully used to coordinate field project data management efforts for the past 15 years. This paper will describe the EOL CDS/DMG Data Tracking System including its basic functionality, the provenance maintained within the system, lessons learned, potential improvements, and future developments.
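A hypothetical sketch (the real DTS schema is internal to EOL and not published here) of the kind of per-dataset record the Data Loading Notes (DLN) subsystem is described as keeping: title, physical locations, responsible parties, issues, and correspondence.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetRecord:
    """One DLN-style tracking entry for a dataset (illustrative field names only)."""
    title: str
    physical_locations: List[str] = field(default_factory=list)
    responsible_parties: List[str] = field(default_factory=list)
    issues: List[str] = field(default_factory=list)
    correspondence: List[str] = field(default_factory=list)

record = DatasetRecord(
    title="Example radiosonde dataset",
    physical_locations=["/archive/project_x/soundings"],
    responsible_parties=["data.manager@example.edu"],
)
record.issues.append("Timestamp offsets corrected during quality control")
```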
A critical look at spatial scale choices in satellite-based aerosol indirect effect studies
NASA Astrophysics Data System (ADS)
Grandey, B. S.; Stier, P.
2010-12-01
Analysing satellite datasets over large regions may introduce spurious relationships between aerosol and cloud properties due to spatial variations in aerosol type, cloud regime and synoptic regime climatologies. Using MODerate resolution Imaging Spectroradiometer data, we calculate relationships between aerosol optical depth τa, derived liquid cloud droplet effective number concentration Ne and liquid cloud droplet effective radius re at different spatial scales. Generally, positive values of dlnNe/dlnτa are found for ocean regions, whilst negative values occur for many land regions. The spatial distribution of dlnre/dlnτa shows approximately the opposite pattern, with generally positive values for land regions and negative values for ocean regions. We find that for region sizes larger than 4° × 4°, spurious spatial variations in retrieved cloud and aerosol properties can introduce widespread significant errors to calculations of dlnNe/dlnτa and dlnre/dlnτa. For regions on the scale of 60° × 60°, these methodological errors may lead to an overestimate in global cloud albedo effect radiative forcing of order 80% relative to that calculated for regions on the scale of 1° × 1°.
NASA Astrophysics Data System (ADS)
Jana, Sukhendu; Das, Sayan; De, Debasish; Mondal, Anup; Gangopadhyay, Utpal
2018-02-01
Presently, silicon nitride (SiNx) is widely used as an antireflection coating (ARC) on p-type silicon solar cells, but its deposition requires two highly toxic gases, ammonia and silane. In the present study, the ARC and passivation properties of diamond-like nanocomposite (DLN) thin films on silicon solar cells have been investigated. The DLN thin film has been deposited by an rf-PACVD process using the liquid precursor HMDSO in argon plasma. The film has been characterized by FESEM, HRTEM, FTIR, and Raman spectroscopy. The optical properties have been estimated by UV-vis-NIR spectroscopy. A minimum reflectance of 0.75% has been achieved at 630 nm. Both the short-circuit current density and the open-circuit voltage have increased significantly, from 28.6 mA cm⁻² to 35.5 mA cm⁻² and from 0.551 V to 0.613 V, respectively. The field-effect passivation has been confirmed by dark I-V characterization of a c-Si/DLN heterojunction structure. All of this leads to an enhancement of efficiency by almost 4% absolute, which is comparable to SiNx. The ammonia- and silane-free deposited DLN thin film therefore has great potential for use as an ARC for silicon-based solar cells.
Lu, Mingfang; Munford, Robert S.
2011-01-01
The extraordinary potency and pathological relevance of Gram-negative bacterial lipopolysaccharides (LPSs) have made them very popular experimental agonists, yet little is known about what happens to these stimulatory molecules within animal tissues. We tracked fluorescent and radiolabeled-LPS from a subcutaneous inoculation site to its draining lymph nodes (DLN), blood and liver. Although we found FITC-labeled LPS in DLN within minutes of injection, drainage of radiolabeled LPS continued for more than six weeks. Within the DLN, most of the LPS was found in the subcapsular sinus or medulla, near or within lymphatic endothelial cells and CD169+ macrophages. Whereas most of the LPS seemed to pass through the DLN without entering B cell follicles, by 24 hrs after injection a small amount of LPS was found in the paracortex. In wildtype mice, ≥70% of the injected radiolabeled-LPS underwent inactivation by deacylation before it left the footpad; in animals that lacked acyloxyacyl hydrolase, the LPS-deacylating enzyme, prolonged drainage of fully acylated (active) LPS boosted polyclonal IgM and IgG3 antibody titers. LPS egress from a subcutaneous injection site thus occurred over many weeks and was mainly via lymphatic channels. Its immunological potency, as measured by its ability to stimulate polyclonal antibody production, was greatly influenced by the kinetics of both lymphatic drainage and enzymatic inactivation. PMID:21849675
Two photon microscopy intravital study of DC-mediated anti-tumor response of NK cells
NASA Astrophysics Data System (ADS)
Caccia, Michele; Gorletta, Tatiana; Sironi, Laura; Zanoni, Ivan; Salvetti, Cristina; Collini, Maddalena; Granucci, Francesca; Chirico, Giuseppe
2010-02-01
Recent studies have demonstrated that dendritic cells (DCs) play a crucial role in the activation of Natural Killer cells (NKs) that are responsible for anti-tumor innate immune responses. The focus of this report is on the role of pathogen-associated molecular pattern (PAMP)-activated DCs in inducing NK cell-mediated anti-tumor responses. Mice transplanted subcutaneously (s.c.) with AK7 cells, a mesothelioma cell line sensitive to NK cell responses, are injected with fluorescent NK cells, and DC activation is then induced by s.c. injection of lipopolysaccharide (LPS). Using 4-dimensional tracking, we follow the kinetic behavior of NK cells at the draining lymph node (DLN). As a control, noninflammatory conditions are also evaluated. Our data suggest that NK cells are recruited to the DLN where they can interact with activated DCs with a peculiar kinetic behavior: short-lived interactions interspersed with rarer, longer ones. We also found that the changes in NK dynamic behavior under inflammatory conditions clearly affect relevant motility parameters such as the instantaneous and average velocity and the effective diffusion coefficient. This observation suggests that NK cells and activated DCs might efficiently interact in the DLN, where cells could be activated. Therefore the interaction between activated DCs and NK cells in the DLN is not only real but may also be crucial for the start of the NK immune response.
Mast Cells Condition Dendritic Cells to Mediate Allograft Tolerance
de Vries, Victor C.; Pino-Lagos, Karina; Nowak, Elizabeth C.; Bennett, Kathy A.; Oliva, Carla; Noelle, Randolph J.
2013-01-01
Peripheral tolerance orchestrated by regulatory T cells, dendritic cells (DCs), and mast cells (MCs) has been studied in several models including skin allograft tolerance. We now define a role for MCs in controlling DC behavior ("conditioning") to facilitate tolerance. Under tolerant conditions, we show that MCs mediated a marked increase in tumor necrosis factor (TNFα)-dependent accumulation of graft-derived DCs in the dLN compared to nontolerant conditions. This increase of DCs in the dLN is due to the local production of granulocyte macrophage colony-stimulating factor (GM-CSF) by MCs that induces a survival advantage of graft-derived DCs. DCs that migrated to the dLN from the tolerant allograft were tolerogenic; i.e., they dominantly suppress T cell responses and control regional immunity. This study underscores the importance of MCs in conditioning DCs to mediate peripheral tolerance and shows a functional impact of peripherally produced TNFα and GM-CSF on the migration and function of tolerogenic DCs. PMID:22035846
Richner, Justin M; Gmyrek, Grzegorz B; Govero, Jennifer; Tu, Yizheng; van der Windt, Gerritje J W; Metcalf, Talibah U; Haddad, Elias K; Textor, Johannes; Miller, Mark J; Diamond, Michael S
2015-07-01
Impaired immune responses in the elderly lead to reduced vaccine efficacy and increased susceptibility to viral infections. Although several groups have documented age-dependent defects in adaptive immune priming, the deficits that occur prior to antigen encounter remain largely unexplored. Herein, we identify novel mechanisms for compromised adaptive immunity that occurs with aging in the context of infection with West Nile virus (WNV), an encephalitic flavivirus that preferentially causes disease in the elderly. An impaired IgM and IgG response and enhanced vulnerability to WNV infection during aging was linked to delayed germinal center formation in the draining lymph node (DLN). Adoptive transfer studies and two-photon intravital microscopy revealed a decreased trafficking capacity of donor naïve CD4+ T cells from old mice, which manifested as impaired T cell diapedesis at high endothelial venules and reduced cell motility within DLN prior to antigen encounter. Furthermore, leukocyte accumulation in the DLN within the first few days of WNV infection or antigen-adjuvant administration was diminished more generally in old mice and associated with a second aging-related defect in local cytokine and chemokine production. Thus, age-dependent cell-intrinsic and environmental defects in the DLN result in delayed immune cell recruitment and antigen recognition. These deficits compromise priming of early adaptive immune responses and likely contribute to the susceptibility of old animals to acute WNV infection.
Ikebuchi, Ryoyo; Teraguchi, Shunsuke; Vandenbon, Alexis; Honda, Tetsuya; Shand, Francis H W; Nakanishi, Yasutaka; Watanabe, Takeshi; Tomura, Michio
2016-10-19
Foxp3+ regulatory T cells (Tregs) migrating from the skin to the draining lymph node (dLN) have a strong immunosuppressive effect on the cutaneous immune response. However, the subpopulations responsible for their inhibitory function remain unclear. We investigated single-cell gene expression heterogeneity in Tregs from the dLN of inflamed skin in a contact hypersensitivity model. The immunosuppressive genes Ctla4 and Tgfb1 were expressed in the majority of Tregs. Although Il10-expressing Tregs were rare, unexpectedly, the majority of Il10-expressing Tregs co-expressed Gzmb and displayed Th1-skewing. Single-cell profiling revealed that CD43+CCR5+ Tregs represented the main subset within the Il10/Gzmb-expressing cell population in the dLN. Moreover, CD43+CCR5+CXCR3− Tregs expressed skin-tropic chemokine receptors, were preferentially retained in inflamed skin and downregulated the cutaneous immune response. The identification of a rare Treg subset co-expressing multiple immunosuppressive molecules and having tissue-remaining capacity offers a novel strategy for the control of skin inflammatory responses.
Very delayed lupus nephritis: a report of three cases and literature review.
Alexandre, André R; Carreira, Pedro L; Isenberg, David A
2018-01-01
Lupus nephritis (LN) affects up to 50% of patients with Systemic Lupus Erythematosus (SLE) and is associated with a worse prognosis. LN usually develops within the first 5 years of the onset of the disease. We report three patients with very delayed LN (DLN), diagnosed 15 or more years after the SLE diagnosis. The three patients were non-Caucasian women with adolescent- or adult-onset SLE. Each had antinuclear, anti-dsDNA and anti-Ro antibodies. Hydroxychloroquine was prescribed for each. Their disease courses were characterised by sporadic non-renal flares controlled by steroids and, in two cases, by one cycle of rituximab. Unexpectedly, they developed proteinuria, haematuria and a lowering of the estimated glomerular filtration rate, with clinical signs of renal disease. LN was confirmed by renal biopsy. On review, each showed serological signs of increasing disease activity (rising levels of anti-dsDNA antibodies and a fall in C3) that predated clinical or laboratory signs of LN by 1-3 years. Reviewing the literature, we found a lack of knowledge about DLN starting more than 15 years after SLE diagnosis. With the increasing life expectancy of patients with SLE, it is likely that more cases of very DLN will emerge.
Peng, Yu-Huei; Shih, Yang-hsin; Lai, Yen-Chun; Liu, Yuan-Zan; Liu, Ying-Tong; Lin, Nai-Chun
2014-01-01
The increasing usage and persistence of polyester polyurethane (PU) generate a significant source of environmental pollution. Effective and environmentally friendly bioremediation techniques for this refractory waste are in high demand. In this study, three novel PU-degrading bacteria were isolated from farm soils and activated sludge, and their identities were determined based on 16S ribosomal RNA gene sequence BLAST analysis. Particularly robust activity was observed in Pseudomonas putida, which took 4 days to degrade 92% of Impranil DLN™ while using it to support its growth. The optimum temperature and pH for DLN removal by P. putida were 25 °C and 8.4, respectively. The degradation and transformation of DLN, investigated by Fourier transform infrared spectroscopy, showed a decrease in the ester functional group and the emergence of an amide group. Polyurethanolytic activity was present both in the extracellular fraction and in the cytosol. Esterase activity was detected in the cell lysate. A 45-kDa protein bearing polyurethanolytic activity was also detected in the extracellular medium. This study demonstrated the high PU-degrading activity of P. putida and identified the enzymes responsible during the PU degradation process, which could be applied to the bioremediation and management of plastic wastes.
Robustness of Global Radial Anisotropy Models of the Upper Mantle
NASA Astrophysics Data System (ADS)
Xing, Z.; Beghein, C.; Yuan, K.
2014-12-01
Radial anisotropy provides important constraints on mantle deformation. While its presence is well accepted in the uppermost mantle, large discrepancies remain among existing models, even at depths well sampled by seismic data, and its presence at greater depths is highly uncertain. Surface wave phase velocity dispersion measurements are routinely used to constrain lateral variations in mantle S-wave velocity (dlnVS) and radial anisotropy (ξ = VSH²/VSV²). Here, we employed the fundamental and higher mode surface wave phase velocity maps of Visser et al. (2008), which have unprecedented sensitivity to structure down to 800-1000 km depth, and we adopted a probabilistic forward modeling approach, the Neighbourhood Algorithm, to quantify posterior model uncertainties and parameter trade-offs. We investigated the effect of prior crustal corrections on 3-D ξ and dlnVS models. To avoid mapping crustal structure onto mantle heterogeneities, it is indeed important to accurately account for 3-D crustal anomalies and variations in Moho depth. One approach is to solve the non-linear problem and simultaneously constrain Moho depth and mantle anomalies (Visser et al., 2008). Another approach, taken here, is to calculate non-linear crustal corrections with an a priori crustal model, which are then applied to the phase velocity maps before inverting the remaining signal for mantle structure. In this work, we also determined laterally varying sensitivity kernels to account for lateral changes in the crust. We compare models obtained using CRUST2.0 (Bassin et al., 2000) and the new CRUST1.0 (Laske et al., 2012) models, which mostly differ under continents. Our preliminary results show strong differences (ΔdlnVS > 2%) between the two models in continental dlnVS for the upper 150-200 km, and strong changes in ξ amplitudes in the top 200 km (Δξ > 2%). Some of the differences in ξ persist down to the transition zone, in particular beneath central Asia and South America. Despite these discrepancies, inferences on the depth of continental roots (~200-250 km) based on either the extent of the dlnVS > 0 anomalies or the depth at which ξ changes sign remain independent of the crustal model employed. We also note that VSV > VSH dominates the deep upper mantle except in the central Pacific, which is characterized by VSH > VSV down to the transition zone.
NASA Astrophysics Data System (ADS)
Pimenov, S. M.; Zavedeev, E. V.; Arutyunyan, N. R.; Zilova, O. S.; Shupegin, M. L.; Jaeggi, B.; Neuenschwander, B.
2017-10-01
Laser surface micropatterning (texturing) of hard materials and coatings is an effective technique for improving tribological systems. In this paper, we have investigated laser-induced surface modifications and micropatterning of diamond-like nanocomposite (DLN) films (a-C:H,Si:O) using IR and visible femtosecond (fs) lasers, focusing on the improvement of the frictional properties of laser-patterned films on the micro- and macroscale. The IR and visible fs-lasers, operating at λ = 1030 nm and λ = 515 nm wavelengths (pulse duration 320 fs, pulse repetition rate 101 kHz), are used to fabricate different patterns for subsequent friction tests. The IR fs-laser is applied to produce hill-like micropatterns under conditions of surface graphitization and incipient ablation, and the visible fs-laser is used for making microgroove patterns in DLN films under ablation conditions. Regimes of irradiation with low-energy IR laser pulses are chosen to produce graphitized micropatterns. For these regimes, results of numerical calculations of the temperature and graphitized layer growth are presented and show good correlation with surface relief modifications, and the features of fs-laser graphitization are discussed based on Raman spectroscopy analysis. Using lateral force microscopy, the role of surface modifications (graphitization, nanostructuring) in the improved microfriction properties is investigated. New data on the influence of capillary forces on friction forces, which strongly changes the microscale friction behaviour, are presented for a wide range of loads (from nN to μN) applied to Si tips. In macroscopic ball-on-disk tests, a pair-dependent friction behaviour of laser-patterned films is observed. The first experimental data on the improved friction properties of laser-micropatterned DLN films under boundary lubricated sliding conditions are presented. The obtained results show that DLN films are an interesting coating material well suited to laser patterning applications in tribology.
Iwami, Daiki; Brinkman, C Colin; Bromberg, Jonathan S
2015-04-01
Circulation of leukocytes via blood, tissue and lymph is integral to adaptive immunity. Afferent lymphatics form CCL21 gradients to guide dendritic cells and T cells to lymphatics and then to draining lymph nodes (dLN). Vascular endothelial growth factor C and vascular endothelial growth factor receptor 3 (VEGFR-3) are the major lymphatic growth factor and receptor. We hypothesized these molecules also regulate chemokine gradients and lymphatic migration. CD4 T cells were injected into the foot pad or ear pinnae, and migration to afferent lymphatics and dLN quantified by flow cytometry or whole mount immunohistochemistry. Vascular endothelial growth factor receptor 3 or its signaling or downstream actions were modified with blocking monoclonal antibodies (mAbs) or other reagents. Anti-VEGFR-3 prevented migration of CD4 T cells into lymphatic lumen and significantly decreased the number that migrated to dLN. Anti-VEGFR-3 abolished CCL21 gradients around lymphatics, although CCL21 production was not inhibited. Heparan sulfate (HS), critical to establish CCL21 gradients, was down-regulated around lymphatics by anti-VEGFR-3 and this was dependent on heparanase-mediated degradation. Moreover, a Phosphoinositide 3-kinase (PI3K)α inhibitor disrupted HS and CCL21 gradients, whereas a PI3K activator prevented the effects of anti-VEGFR-3. During contact hypersensitivity, VEGFR-3, CCL21, and HS expression were all attenuated, and anti-heparanase or PI3K activator reversed these effects. Vascular endothelial growth factor C/VEGFR-3 signaling through PI3Kα regulates the activity of heparanase, which modifies HS and CCL21 gradients around lymphatics. The functional and physical linkages of these molecules regulate lymphatic migration from tissues to dLN. These represent new therapeutic targets to influence immunity and inflammation.
LeWinter, Robin D.; Scherrer, Grégory; Basbaum, Allan I.
2008-01-01
The transient receptor potential cation channel TRPV2 is a member of the TRPV family of proteins and is a homologue of the capsaicin/vanilloid receptor (TRPV1). Like TRPV1, TRPV2 is expressed in a subset of dorsal root ganglia (DRG) neurons that project to superficial laminae of the spinal cord dorsal horn. Because noxious heat (>52°C) activates TRPV2 in transfected cells this channel has been implicated in the processing of high intensity thermal pain messages in vivo. In contrast to TRPV1, however, which is restricted to small diameter DRG neurons, there is significant TRPV2 immunoreactivity in a variety of CNS regions. The present report focuses on a subset of neurons in the brainstem and spinal cord of the rat including the dorsal lateral nucleus (DLN) of the spinal cord, the nucleus ambiguus, and the motor trigeminal nucleus. Double label immunocytochemistry with markers of motoneurons, combined with retrograde labeling, established that these cells are, in fact, motoneurons. With the exception of their smaller diameter, these cells did not differ from other motoneurons, which are only lightly TRPV2-immunoreactive. As for the majority of DLN neurons, the densely-labeled populations co-express androgen receptor and follow normal DLN ontogeny. The functional significance of the very intense TRPV2 expression in these three distinct spinal cord and brainstem motoneurons groups remains to be determined. PMID:18063314
Topological Dirac line nodes and superconductivity coexist in SnSe at high pressure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xuliang; Lu, Pengchao; Wang, Xuefei
2017-10-01
We report on the discovery of a pressure-induced topological and superconducting phase of SnSe, a material which has recently attracted much attention due to its superior thermoelectric properties. In situ high-pressure electrical transport and synchrotron x-ray diffraction measurements show that superconductivity emerges along with the formation of a CsCl-type structural phase of SnSe above around 27 GPa, with a maximum critical temperature of 3.2 K at 39 GPa. Based on ab initio calculations, this CsCl-type SnSe is predicted to be a Dirac line-node (DLN) semimetal in the absence of spin-orbit coupling, whose DLN states are protected by the coexistence of time-reversal and inversion symmetries. These results make CsCl-type SnSe an interesting model platform with simple crystal symmetry to study the interplay of topological physics and superconductivity.
Altered Sympathetic-to-Immune Cell Signaling via β2-Adrenergic Receptors in Adjuvant Arthritis
Bellinger, Denise L.; Schaller, Jill A.; Osredkar, Tracy
2013-01-01
Adjuvant-induced arthritis (AA) differentially affects norepinephrine concentrations in immune organs, and in vivo β-adrenergic receptor (β-AR) agonist treatment distinctly regulates ex vivo cytokine profiles in different immune organs. We examined the contribution of altered β-AR functioning in AA to understand these disparate findings. Twenty-one or 28 days after disease induction, we examined β2-AR expression in the spleen and draining lymph nodes (DLNs) for the arthritic limbs using radioligand binding and western blots, and splenocyte β-AR-stimulated cAMP production using enzyme-linked immunoassay (EIA). During severe disease, β-AR agonists failed to induce splenocyte cAMP production, and β-AR affinity and density declined, indicating receptor desensitization and downregulation. Splenocyte β2-AR phosphorylation (pβ2-AR) by protein kinase A (pβ2-AR-PKA) decreased in severe disease, and pβ2-AR by G protein-coupled receptor kinases (pβ2-AR-GRK) increased in chronic disease. Conversely, in DLN cells, pβ2-AR-PKA rose during severe disease, but fell during chronic disease, and pβ2-AR-GRK increased during both disease stages. A similar pβ2-AR pattern in DLN cells with the mycobacterial cell wall component of complete Freund's adjuvant suggests that pattern recognition receptors (i.e., toll-like receptors) are important for DLN pβ2-AR patterns. Collectively, our findings indicate lymphoid organ- and disease stage-specific sympathetic dysregulation, possibly explaining immune compartment-specific differences in β2-AR-mediated regulation of cytokine production in AA and rheumatoid arthritis. PMID:24194774
NASA Astrophysics Data System (ADS)
Murray, B.; Alston, E. J.; Chambers, L. H.; Bynum, A.; Montgomery, C.; Blue, S.; Kowalczak, C.; Leighton, A.; Bosman, L.
2017-12-01
NASA Earth Systems, Technology and Energy Education for Minority University Research & Education Program - MUREP (ESTEEM) activities enhance institutional capacity of minority serving institutions (MSIs) related to Earth System Science, Technology and energy education; in turn, increasing access of underrepresented groups to science careers and opportunities. ESTEEM is a competitive portfolio that has been providing funding to institutions across the United States for 10 years. Over that time 76 separate activities have been funded. Beginning in 2011 ESTEEM awards focused on MSIs and public-school districts with high under-represented enrollment. Today ESTEEM awards focus on American Indian/Alaska Native serving institutions (Tribal Colleges and Universities), the very communities most severely in need of ability to deal with climate adaptation and resiliency. ESTEEM engages a multi-faceted approach to address economic and cultural challenges facing MSI communities. PIs (Principal Investigators) receive support from a management team at NASA, and are supported by a larger network, the ESTEEM Cohort, which connects regularly through video calls, virtual video series and in-person meetings. The cohort acts as a collective unit to foster interconnectivity and knowledge sharing in both physical and virtual settings. ESTEEM partners with NASA's Digital Learning Network (DLNTM) in a unique non-traditional model to leverage technical expertise. DLN services over 10,000 participants each year through interactive web-based synchronous and asynchronous events. These events allow for cost effective (no travel) engagement of multiple, geographically dispersed audiences to share local experiences with one another. Events allow PIs to grow their networks, technical base, professional connections, and develop a sense of community, encouraging expansion into larger and broader interactions. Over 256 connections, beyond the 76 individual members, exist within the cohort. PIs report significant improvement in student retention and increased interest in STEM coursework as outcomes. This presentation will delve into specifics of these metrics, provide details of various successes and explore future opportunities for expanding the impact of large-scale culturally relevant collaborative networks.
Murine Visceral Leishmaniasis: IgM and Polyclonal B-Cell Activation Lead to Disease Exacerbation
Deak, Eszter; Jayakumar, Asha; Wing Cho, Ka; Goldsmith-Pestana, Karen; Dondji, Blaise; Lambris, John D.; McMahon-Pratt, Diane
2010-01-01
In visceral leishmaniasis, the draining lymph node (DLN) is the initial site for colonization and establishment of infection after intradermal transmission by the sand fly vector; however, little is known about the developing immune response within this site. Using an intradermal infection model, which allows for parasite visceralization, we have examined the ongoing immune responses in the DLN of BALB/c mice infected with L. infantum. Although not unexpected, at early times post-infection there is a marked B cell expansion in the DLN, which persists throughout infection. However, the characteristics of this response were of interest; as early as day 7 post-infection, polyclonal antibodies (TNP, OVA, chromatin) were observed and the levels appeared comparable to the specific anti-leishmania response. Although B-cell-deficient JHD BALB/c mice are relatively resistant to infection, neither B-cell-derived IL-10 nor B-cell antigen presentation appear to be primarily responsible for the elevated parasitemia. However, passive transfer and reconstitution of JHD BALB/c with secretory immunoglobulins (IgM or IgG; specific or non-specific immune complexes) results in increased susceptibility to L. infantum infection. Further, JHD BALB/c mice transgenically reconstituted to secrete IgM demonstrated exacerbated disease in comparison to wild type BALB/c mice as early as 2 days post-infection. Evidence suggests that complement activation (generation of C5a) and signaling via the C5aR (CD88) is related to the disease exacerbation caused by IgM rather than cytokine levels (IL-10 or IFN-γ). Overall these studies indicate that polyclonal B cell activation, which is known to be associated with human visceral leishmaniasis, is an early and intrinsic characteristic of disease and may represent a target for therapeutic intervention. PMID:20213734
Lewinter, R D; Scherrer, G; Basbaum, A I
2008-01-02
The transient receptor potential cation channel, vanilloid family, type 2 (TRPV2) is a member of the TRPV family of proteins and is a homologue of the capsaicin/vanilloid receptor (transient receptor potential cation channel, vanilloid family, type 1, TRPV1). Like TRPV1, TRPV2 is expressed in a subset of dorsal root ganglia (DRG) neurons that project to superficial laminae of the spinal cord dorsal horn. Because noxious heat (>52 degrees C) activates TRPV2 in transfected cells this channel has been implicated in the processing of high intensity thermal pain messages in vivo. In contrast to TRPV1, however, which is restricted to small diameter DRG neurons, there is significant TRPV2 immunoreactivity in a variety of CNS regions. The present report focuses on a subset of neurons in the brainstem and spinal cord of the rat including the dorsal lateral nucleus (DLN) of the spinal cord, the nucleus ambiguus, and the motor trigeminal nucleus. Double label immunocytochemistry with markers of motoneurons, combined with retrograde labeling, established that these cells are, in fact, motoneurons. With the exception of their smaller diameter, these cells did not differ from other motoneurons, which are only lightly TRPV2-immunoreactive. As for the majority of DLN neurons, the densely-labeled populations co-express androgen receptor and follow normal DLN ontogeny. The functional significance of the very intense TRPV2 expression in these three distinct spinal cord and brainstem motoneurons groups remains to be determined.
Defective Innate Cell Response and Lymph Node Infiltration Specify Yersinia pestis Infection
Guinet, Françoise; Avé, Patrick; Jones, Louis; Huerre, Michel; Carniel, Elisabeth
2008-01-01
Since its recent emergence from the enteropathogen Yersinia pseudotuberculosis, Y. pestis, the plague agent, has acquired an intradermal (id) route of entry and an extreme virulence. To identify pathophysiological events associated with the Y. pestis high degree of pathogenicity, we compared disease progression and evolution in mice after id inoculation of the two Yersinia species. Mortality studies showed that the id portal was not in itself sufficient to provide Y. pseudotuberculosis with the high virulence power of its descendant. Surprisingly, Y. pseudotuberculosis multiplied even more efficiently than Y. pestis in the dermis, and generated comparable histological lesions. Likewise, Y. pseudotuberculosis translocated to the draining lymph node (DLN) and similar numbers of the two bacterial species were found at 24 h post infection (pi) in this organ. However, on day 2 pi, bacterial loads were higher in Y. pestis-infected than in Y. pseudotuberculosis-infected DLNs. Clustering and multiple correspondence analyses showed that the DLN pathologies induced by the two species were statistically significantly different and identified the most discriminating elementary lesions. Y. pseudotuberculosis infection was accompanied by abscess-type polymorphonuclear cell infiltrates containing the infection, while Y. pestis-infected DLNs exhibited an altered tissue density and a vascular congestion, and were typified by an invasion of the tissue by free floating bacteria. Therefore, Y. pestis exceptional virulence is not due to its recently acquired portal of entry into the host, but is associated with a distinct ability to massively infiltrate the DLN, without inducing in this organ an organized polymorphonuclear cell reaction. These results shed light on pathophysiological processes that draw the line between a virulent and a hypervirulent pathogen. PMID:18301765
Defective innate cell response and lymph node infiltration specify Yersinia pestis infection.
Guinet, Françoise; Avé, Patrick; Jones, Louis; Huerre, Michel; Carniel, Elisabeth
2008-02-27
Since its recent emergence from the enteropathogen Yersinia pseudotuberculosis, Y. pestis, the plague agent, has acquired an intradermal (id) route of entry and an extreme virulence. To identify pathophysiological events associated with the Y. pestis high degree of pathogenicity, we compared disease progression and evolution in mice after id inoculation of the two Yersinia species. Mortality studies showed that the id portal was not in itself sufficient to provide Y. pseudotuberculosis with the high virulence power of its descendant. Surprisingly, Y. pseudotuberculosis multiplied even more efficiently than Y. pestis in the dermis, and generated comparable histological lesions. Likewise, Y. pseudotuberculosis translocated to the draining lymph node (DLN) and similar numbers of the two bacterial species were found at 24 h post infection (pi) in this organ. However, on day 2 pi, bacterial loads were higher in Y. pestis-infected than in Y. pseudotuberculosis-infected DLNs. Clustering and multiple correspondence analyses showed that the DLN pathologies induced by the two species were statistically significantly different and identified the most discriminating elementary lesions. Y. pseudotuberculosis infection was accompanied by abscess-type polymorphonuclear cell infiltrates containing the infection, while Y. pestis-infected DLNs exhibited an altered tissue density and a vascular congestion, and were typified by an invasion of the tissue by free floating bacteria. Therefore, Y. pestis exceptional virulence is not due to its recently acquired portal of entry into the host, but is associated with a distinct ability to massively infiltrate the DLN, without inducing in this organ an organized polymorphonuclear cell reaction. These results shed light on pathophysiological processes that draw the line between a virulent and a hypervirulent pathogen.
Astrobiology in an Urban New York City High School: John Dewey High School's Space Science Academy
NASA Astrophysics Data System (ADS)
Fried, B.; Dash, H. B.
2010-04-01
John Dewey High School's participation in NASA's MESDT and DLN projects and other partnerships provide opportunities for our diverse population, focusing particular attention to under-represented and under-served groups in the field of Space Science.
Nash, A A; Quartey-Papafio, R; Wildy, P
1980-08-01
The functional characteristics of lymphoid cells were investigated during acute and latent infection of mice with herpes simplex virus (HSV). Cytotoxic T cells were found in the draining lymph node (DLN) 4 days p.i. and had reached maximum activity between 6 and 9 days. After the 12th day and during the period of latent infection (> 20 days) no cytotoxic cell activity was observed. Cytotoxic activity could only be detected when the lymphoid cells had been cultured for a period of 3 days. In general, the cell killing was specific for syngeneic infected target cells, although some killing of uninfected targets was observed. In contrast to the cytotoxic response, DLN cells responding to HSV in a proliferation assay were detected towards the end of the acute phase and at least up to 9 months thereafter. The significance of these observations for the pathogenesis of HSV is discussed.
Pressure dependence of the radial mode frequency in carbon nanotubes
NASA Astrophysics Data System (ADS)
Venkateswaran, Uma; Masica, D.; Sumanasekara, G.; Eklund, P.
2003-03-01
Recently, an analytical expression for the radial breathing mode frequency, ω_R, was derived by considering the oscillations of a thin hollow cylinder [1]. Using this result and the experimental pressure dependence of the elastic and lattice constants of graphite, we show that the pressure derivative of ω_R depends inversely on the nanotube diameter, D. Since ω_R also depends inversely on D, the above result implies that the logarithmic pressure derivative of ω_R, i.e., dlnω_R/dP, should be independent of D. We have performed high-pressure Raman scattering experiments on HiPCO-SWNT bundles using different laser excitations, thereby probing the radial modes of tubes of different diameters. These measurements show an increase in dlnω_R/dP with increasing D. This difference between prediction and experiment suggests that the main contribution to the pressure dependence of ω_R in SWNT bundles stems from the tube-tube interactions within the bundle and from pressure-induced distortions of the tube cross-section. [1] G.D. Mahan, Phys. Rev. B 65, 235402 (2002).
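A one-line restatement of the argument above, writing C(P) for the pressure-dependent combination of graphite elastic and lattice constants (a label assumed here):

```latex
\omega_R(P,D) \;=\; \frac{C(P)}{D}
\quad\Longrightarrow\quad
\frac{d\ln\omega_R}{dP} \;=\; \frac{d\ln C}{dP},
```

which is independent of D; this is precisely the prediction that the diameter-dependent dlnω_R/dP measured for HiPCO-SWNT bundles contradicts.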
Shock compression of preheated silicate liquids: 30 years of progress
NASA Astrophysics Data System (ADS)
Asimow, Paul
2011-06-01
Tom Ahrens and his students pioneered, beginning around 1981, the technique of determining silicate liquid equations of state for geophysical applications using shock compression of pre-heated, encapsulated samples. In the last decade, we have ported this technique to the Caltech two-stage light gas gun and extended several pre-heated liquid Hugoniots to over 125 GPa. We now have enough compositions studied to perform several tests of the theory of linear mixing or, assuming linear mixing, to describe any liquid in the five-component CaO-MgO-FeO-Al2O3-SiO2 system. These data allow us to identify liquid compositions likely to be negatively or neutrally buoyant in the lower mantle and to form a preliminary description of the dynamics of partial melting of the solid lower mantle or initial crystallization of a deep mantle magma ocean. The most robust and surprising feature of all studied liquids, which places very strong constraints on microscopic models of silicate liquid compression behavior, is an anomalous increase of the Grüneisen parameter upon compression, with a remarkably consistent q = dlnγ/dlnV = -1.75 ± 0.25. This work has benefited from long-term support by the National Science Foundation.
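If, as the quoted fit implies, q is roughly constant over the compression range, the definition integrates to a simple closed form (γ_0 and V_0 are reference values, notation assumed here):

```latex
q \;\equiv\; \frac{d\ln\gamma}{d\ln V} \;\approx\; -1.75
\quad\Longrightarrow\quad
\gamma(V) \;\simeq\; \gamma_0 \left(\frac{V}{V_0}\right)^{q},
```

so for q < 0 the Grüneisen parameter grows as V decreases, which is the anomalous increase on compression noted above.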
NASA Astrophysics Data System (ADS)
Cobden, Laura; Mosca, Ilaria; Trampert, Jeannot; Ritsema, Jeroen
2012-11-01
Recent experimental studies indicate that perovskite, the dominant lower mantle mineral, undergoes a phase change to post-perovskite at high pressures. However, it has been unclear whether this transition occurs within the Earth's mantle, due to uncertainties in both the thermochemical state of the lowermost mantle and the pressure-temperature conditions of the phase boundary. In this study we compare the relative fit to global seismic data of mantle models which do and do not contain post-perovskite, following a statistical approach. Our data comprise more than 10,000 Pdiff and Sdiff travel-times, global in coverage, from which we extract the global distributions of dln VS and dln VP near the core-mantle boundary (CMB). These distributions are sensitive to the underlying lateral variations in mineralogy and temperature even after seismic uncertainties are taken into account, and are ideally suited for investigating the likelihood of the presence of post-perovskite. A post-perovskite-bearing CMB region provides a significantly closer fit to the seismic data than a post-perovskite-free CMB region on both a global and regional scale. These results complement previous local seismic reflection studies, which have shown a consistency between seismic observations and the physical properties of post-perovskite inside the deep Earth.
N6-Trimethyl-lysine metabolism. 3-Hydroxy-N6-trimethyl-lysine and carnitine biosynthesis.
Hoppel, C L; Cox, R A; Novak, R F
1980-01-01
Rats injected with N6-[Me-3H]trimethyl-lysine excrete in the urine five radioactively labelled metabolites. Two of these identified metabolites are carnitine and 4-trimethylammoniobutyrate. A third metabolite, identified as 5-trimethylammoniopentanoate, is not an intermediate in the biosynthesis of carnitine; the fourth and major metabolite, N2-acetyl-N6-trimethyl-lysine, is not a precursor of carnitine. The remaining metabolite (3-hydroxy-N6-trimethyl-lysine) is converted into trimethylammoniobutyrate and carnitine by rat liver slices and into trimethylammoniobutyrate by rat kidney slices. In rat liver and kidney-slice experiments, radioactivity from DL-N6-trimethyl-[1-14C]lysine and DL-N6-trimethyl-[2-14C]lysine was incorporated into N2-acetyl-N6-trimethyl-lysine and 3-hydroxy-N6-trimethyl-lysine, but not into trimethylammoniobutyrate or carnitine. A procedure was devised to purify milligram quantities of 3-hydroxy-N6-trimethyl-lysine from the urine of rats injected chronically with N6-trimethyl-lysine (100 mg/kg body wt. per day). The structure of 3-hydroxy-N6-trimethyl-lysine was confirmed chemically and by nuclear-magnetic-resonance spectrometry [Novak, Swift & Hoppel (1980) Biochem. J. 188, 521--527]. The sequence for carnitine biosynthesis in liver is: N6-trimethyl-lysine leads to 3-hydryxy-N6-trimethyl-lysine leads to leads to 4-trimethylammoniobutyrate leads to carnitine. PMID:6772168
Tracking quintessence and k-essence in a general cosmological background
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Rupam; Kephart, Thomas W.; Scherrer, Robert J.
We derive conditions for stable tracker solutions for both quintessence and k-essence in a general cosmological background, H² ∝ f(ρ). We find that tracker solutions are possible only when η ≡ dln f/dln ρ ≈ constant, aside from a few special cases, which are enumerated. Expressions for the quintessence or k-essence equation of state are derived as a function of η and the equation of state of the dominant background component.
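The tracker condition hinges on the logarithmic derivative η = dln f/dln ρ being roughly constant. A minimal Python sketch (my own illustration, with assumed example backgrounds, not the paper's analysis) estimates η numerically for a chosen f(ρ):

# Minimal sketch (assumed, not from the paper): numerically estimate
# eta = dln f / dln rho for a chosen background function f(rho).
import numpy as np

def eta(f, rho, dlr=1e-6):
    """Logarithmic derivative dln f/dln rho via central differences in ln(rho)."""
    lr = np.log(rho)
    return (np.log(f(np.exp(lr + dlr))) - np.log(f(np.exp(lr - dlr)))) / (2 * dlr)

# Illustrative backgrounds: standard GR (H^2 ~ rho) and a rho^2 regime.
print(eta(lambda r: r, 1.0))        # ~1.0
print(eta(lambda r: r**2, 1.0))     # ~2.0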
Application of Logic to Integer Sequences: A Survey
NASA Astrophysics Data System (ADS)
Makowsky, Johann A.
Chomsky and Schützenberger showed in 1963 that the sequence d_L(n), which counts the number of words of a given length n in a regular language L, satisfies a linear recurrence relation with constant coefficients in n, or equivalently, the generating function g_L(x) = Σ_n d_L(n) x^n is a rational function. In this talk we survey results concerning sequences a(n) of natural numbers which satisfy linear recurrence relations over ℤ or ℤ_m, and
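As a concrete illustration of the Chomsky–Schützenberger statement (my own example, not from the survey), the regular language of binary strings containing no "11" has d_L(n) satisfying the linear recurrence d_L(n) = d_L(n-1) + d_L(n-2):

# Count words of length n in the regular language of binary strings with no "11"
# and verify the constant-coefficient linear recurrence (Fibonacci-type).
def d_L(n):
    # DP over the automaton state: whether the last symbol was '1'.
    if n == 0:
        return 1
    end0, end1 = 1, 1  # strings of length 1 ending in '0' / '1'
    for _ in range(n - 1):
        end0, end1 = end0 + end1, end0
    return end0 + end1

vals = [d_L(n) for n in range(10)]
print(vals)                                                          # 1, 2, 3, 5, 8, ...
print(all(vals[n] == vals[n-1] + vals[n-2] for n in range(2, 10)))   # True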
Relationship of D'' structure with the velocity variations near the inner-core boundary
NASA Astrophysics Data System (ADS)
Luo, Sheng-Nian; Ni, Sidao; Helmberger, Don
2002-06-01
Variations in regional differential times between PKiKP (i) and PKIKP (I) have been attributed to hemispheric P-velocity variations of about 1% in the upper 100 km of the inner core (referred to as HIC). The top of the inner core appears relatively fast beneath Asia, where D'' is also fast. An alternative interpretation is that lateral variation in P velocity at the lowermost outer core (HOC) produces the same differential times. To resolve this issue, we introduce the diffracted PKP phase near the B caustic (Bdiff) at epicentral distances of 139-145°, and the corresponding differential times between Bdiff and PKiKP and PKIKP as observed on broadband arrays. Because of the long-wavelength nature of Bdiff, we scaled the S-wave tomography model with k values (k ≡ dlnVs/dlnVp) to obtain large-scale P-wave velocity structure in the lower mantle, as proposed by earlier studies. Waveform synthetics of Bdiff constructed with small k's predict complex waveforms not commonly observed, confirming the validity of a large scaling factor k. With the P velocity in the lower mantle constrained at large scale, the extra travel-time constraint imposed by Bdiff helps to resolve the HOC-HIC issue. Our preliminary results suggest k > 2 for the lowermost mantle and support the HIC hypothesis. An important implication is that there appears to be a relationship between D'' velocity structures and structures near the inner core boundary via core dynamics.
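The scaling step described above amounts to converting S-velocity anomalies to P-velocity anomalies with a single factor k. A minimal Python sketch (assumed workflow and illustrative numbers, not the authors' code) makes the arithmetic explicit:

# Scale an S-wave tomography perturbation dlnVs to a P-wave perturbation dlnVp
# using a constant factor k = dlnVs/dlnVp.
import numpy as np

def dlnVp_from_dlnVs(dlnVs, k=2.5):
    """k is an assumed illustrative value; the study argues k > 2 near the CMB."""
    return np.asarray(dlnVs) / k

dlnVs = np.array([-0.02, 0.0, 0.015])   # hypothetical lateral S-velocity anomalies
print(dlnVp_from_dlnVs(dlnVs, k=3.0))   # corresponding P anomalies for k = 3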
Zhang, Fan; Cheng, Yi-Kan; Li, Wen-Fei; Guo, Rui; Chen, Lei; Sun, Ying; Mao, Yan-Ping; Zhou, Guan-Qun; Liu, Xu; Liu, Li-Zhi; Lin, Ai-Hua; Tang, Ling-Long; Ma, Jun
2015-10-15
To assess the feasibility of elective neck irradiation to level Ib in nasopharyngeal carcinoma (NPC) using intensity-modulated radiation therapy (IMRT). We retrospectively analyzed 1438 patients with newly-diagnosed, non-metastatic and biopsy-proven NPC treated with IMRT. Greatest dimension of level IIa LNs (DLN-IIa) ≥ 20 mm and/or level IIa LNs with extracapsular spread (ES), oropharynx involvement and positive bilateral cervical lymph nodes (CLNs) were independently and significantly associated with metastasis to level Ib LN at diagnosis. No recurrence at level Ib was observed in the 904 patients without these characteristics (median follow-up, 38.7 months; range, 1.3-57.8 months); these patients were classified as low risk. Level Ib irradiation was not an independent risk factor for locoregional failure-free survival, distant failure-free survival, failure-free survival or overall survival in low risk patients. The frequency of grade ≥ 2 subjective xerostomia at 12 months after radiotherapy was not significantly different between low risk patients who received level Ib-sparing, unilateral level Ib-covering or bilateral level Ib-covering IMRT. Level Ib-sparing IMRT should be safe and feasible for patients without a DLN-IIa ≥ 20 mm and/or level IIa LNs with ES, positive bilateral CLNs or oropharynx involvement at diagnosis. Further investigations based on specific criteria for dose constraints for the submandibular glands are warranted to confirm the benefit of elective level Ib irradiation.
Phage therapy is effective against infection by Mycobacterium ulcerans in a murine footpad model.
Trigo, Gabriela; Martins, Teresa G; Fraga, Alexandra G; Longatto-Filho, Adhemar; Castro, António G; Azeredo, Joana; Pedrosa, Jorge
2013-01-01
Buruli Ulcer (BU) is a neglected, necrotizing skin disease caused by Mycobacterium ulcerans. Currently, there is no vaccine against M. ulcerans infection. Although the World Health Organization recommends a combination of rifampicin and streptomycin for the treatment of BU, clinical management of advanced stages is still based on the surgical resection of infected skin. The use of bacteriophages for the control of bacterial infections has been considered as an alternative or a complement to antibiotherapy. Additionally, the mycobacteriophage D29 has previously been shown to display lytic activity against M. ulcerans isolates. We used the mouse footpad model of M. ulcerans infection to evaluate the therapeutic efficacy of treatment with mycobacteriophage D29. Analyses of macroscopic lesions, bacterial burdens, histology and cytokine production were performed in both M. ulcerans-infected footpads and draining lymph nodes (DLN). We have demonstrated that a single subcutaneous injection of the mycobacteriophage D29, administered 33 days after bacterial challenge, was sufficient to decrease pathology and to prevent ulceration. This protection resulted in a significant reduction of M. ulcerans numbers accompanied by an increase of cytokine levels (including IFN-γ), both in footpads and DLN. Additionally, mycobacteriophage D29 treatment induced a cellular infiltrate of a lymphocytic/macrophagic profile. Our observations demonstrate the potential of phage therapy against M. ulcerans infection, paving the way for future studies aiming at the development of novel phage-related therapeutic approaches against BU.
Beautiful Earth: Inspiring Native American students in Earth Science through Music, Art and Science
NASA Astrophysics Data System (ADS)
Casasanto, V.; Rock, J.; Hallowell, R.; Williams, K.; Angell, D.; Beautiful Earth
2011-12-01
The Beautiful Earth program, awarded by NASA's Competitive Opportunities in Education and Public Outreach for Earth and Space Science (EPOESS), is a live multi-media performance at partner science centers linked with hands-on workshops featuring Earth scientists and Native American experts. It aims to inspire, engage and educate diverse students in Earth science through an experience of viewing the Earth from space as one interconnected whole, as seen through the eyes of astronauts. The informal education program is an outgrowth of Kenji Williams' BELLA GAIA Living Atlas Experience (www.bellagaia.com), performed across the globe since 2008, and follows the successful Earth Day education events in 2009 and 2010 with NASA's DLN (Digital Learning Network) http://tinyurl.com/2ckg2rh. Beautiful Earth takes a new approach to teaching by combining live music and data visualizations, Earth science with indigenous perspectives of the Earth, and hands-on interactive workshops. The program will utilize the emotionally inspiring multi-media show as a springboard to inspire participants to learn more about Earth systems and science. Native Earth Ways (NEW) will be the first module in a series of three "Beautiful Earth" experiences that will launch the national tour at a presentation in October 2011 at the MOST science museum, in collaboration with the Onondaga Nation School in Syracuse, New York. The NEW module will include Native American experts explaining how they study and conserve the Earth in their own unique ways, along with hands-on activities to convey the science seen in the show. In this first pilot run of the module, 110 K-12 students with faculty and family members of the Onondaga Nation School will take part. The goal of the program is to introduce Native American students to Earth sciences and STEM careers, and encourage them to study these sciences and become responsible stewards of the Earth. The second workshop presented to participants will be the Spaceship Earth Scientist (SES) module, featuring an Earth scientist expert discussing the science seen in the presentation. Hands-on activities such as sea ice melting simulations will be held with participants. Results from these first pilot education experiences will be presented at the 2011 AGU.
Vander Meulen, Kirk A.; Saecker, Ruth M.; Record, M. Thomas
2008-01-01
To characterize driving forces and driven processes in formation of a large-interface, wrapped protein-DNA complex analogous to the nucleosome, we have investigated the thermodynamics of binding the 34 bp H′ DNA sequence to the E. coli DNA-remodeling protein Integration Host Factor (IHF). Isothermal titration calorimetry (ITC) and fluorescence resonance energy transfer (FRET) are applied to determine effects of salt concentration (KCl, KF, KGlutamate (KGlu)), and of the excluded solute glycine betaine, on the binding thermodynamics at 20°C. Both the binding constant Kobs and enthalpy ΔH°obs depend strongly on [salt] and anion identity. Formation of the wrapped complex is enthalpy-driven, especially at low [salt] (e.g. ΔH°obs = −20.2 kcal · mol⁻¹ in 0.04 M KCl). ΔH°obs increases linearly with [salt] with a slope (dΔH°obs/d[salt]) which is much larger in KCl (38 ± 3 kcal · mol⁻¹ M⁻¹) than in KF or KGlu (average 11 ± 2 kcal · mol⁻¹ M⁻¹). At 0.33 M [salt], Kobs is approximately 30-fold larger in KGlu or KF than in KCl, and the [salt] derivative SKobs = dlnKobs/dln[salt] is almost twice as large in magnitude in KCl (−8.8 ± 0.7) as in KF or KGlu (average −4.7 ± 0.6). A novel analysis of the large effects of anion identity on Kobs, SKobs and on ΔH°obs dissects coulombic, Hofmeister and osmotic contributions to these quantities. This analysis attributes anion-specific differences in Kobs, SKobs and ΔH°obs to (i) displacement of a large number of waters of hydration (estimated to be 1.0 (± 0.2) × 10³) from the 5340 Å² of IHF and H′ DNA surface buried in complex formation, and (ii) significant local exclusion of F⁻ and Glu⁻ from this hydration water, relative to the situation with Cl⁻, which we propose is randomly distributed. To quantify net water release from anionic surface (22% of the surface buried in complexation, mostly from DNA phosphates), we determined the stabilizing effect of glycine betaine (GB) on Kobs: dlnKobs/d[GB] = 2.7 ± 0.4 at constant KCl activity, indicating the net release of 150 H₂O from anionic surface. PMID:18237740
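The salt dependence SKobs = dlnKobs/dln[salt] is simply the slope of ln Kobs against ln[salt]. A minimal Python sketch (synthetic illustrative data, not the paper's measurements) shows how such a slope is extracted:

# Estimate SKobs = dlnKobs/dln[salt] as the slope of ln(Kobs) vs ln([salt]).
import numpy as np

salt = np.array([0.05, 0.10, 0.20, 0.33])   # hypothetical KCl concentrations (M)
Kobs = 1e9 * salt ** -8.8                   # synthetic data built with slope -8.8

slope, intercept = np.polyfit(np.log(salt), np.log(Kobs), 1)
print(round(slope, 2))                      # recovers SKobs ~ -8.8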
Ghan, Steven; Wang, Minghuai; Zhang, Shipeng; Ferrachat, Sylvaine; Gettelman, Andrew; Griesfeller, Jan; Kipling, Zak; Lohmann, Ulrike; Morrison, Hugh; Neubauer, David; Partridge, Daniel G; Stier, Philip; Takemura, Toshihiko; Wang, Hailong; Zhang, Kai
2016-05-24
A large number of processes are involved in the chain from emissions of aerosol precursor gases and primary particles to impacts on cloud radiative forcing. Those processes are manifest in a number of relationships that can be expressed as factors dlnX/dlnY driving aerosol effects on cloud radiative forcing. These factors include the relationships between cloud condensation nuclei (CCN) concentration and emissions, droplet number and CCN concentration, cloud fraction and droplet number, cloud optical depth and droplet number, and cloud radiative forcing and cloud optical depth. The relationship between cloud optical depth and droplet number can be further decomposed into the sum of two terms involving the relationship of droplet effective radius and cloud liquid water path with droplet number. These relationships can be constrained using observations of recent spatial and temporal variability of these quantities. However, we are most interested in the radiative forcing since the preindustrial era. Because few relevant measurements are available from that era, relationships from recent variability have been assumed to be applicable to the preindustrial to present-day change. Our analysis of Aerosol Comparisons between Observations and Models (AeroCom) model simulations suggests that estimates of relationships from recent variability are poor constraints on relationships from anthropogenic change for some terms, with even the sign of some relationships differing in many regions. Proxies connecting recent spatial/temporal variability to anthropogenic change, or sustained measurements in regions where emissions have changed, are needed to constrain estimates of anthropogenic aerosol impacts on cloud radiative forcing.
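The chain of factors described above multiplies together (in logarithmic form) to give the overall emission-to-forcing sensitivity. A minimal Python sketch (hypothetical factor values chosen only for illustration, not AeroCom results) makes the decomposition explicit:

# Chain of logarithmic sensitivities; the product gives d ln(forcing proxy)/d ln(emissions).
factors = {
    "dlnCCN/dlnEmissions": 0.6,
    "dlnNd/dlnCCN": 0.5,
    "dlnTau/dlnNd": 0.3,    # itself dlnLWP/dlnNd - dln(re)/dlnNd, since tau ~ LWP/re
    "dlnF/dlnTau": 0.8,
}

sensitivity = 1.0
for name, value in factors.items():
    sensitivity *= value
print(sensitivity)          # overall dlnF/dlnEmissions under these assumed values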
Helium experiments on Alcator C-Mod in support of ITER early operations
Kessel, C. E.; Wolfe, S. M.; Reinke, M. L.; ...
2018-03-13
Helium majority experiments on Alcator C-Mod were performed to compare with deuterium discharges, and inform ITER early operations. ELMy H-modes were produced with a special plasma shape at B_T = 5.3 T, I_P = 0.9 MA, at q_95 ~ 3.8. The He fraction ranged over n_He,L/n_L = 0.2–0.4, with n_D,L/n_L = 0.15–0.26, compared to D plasmas with n_D,L/n_L = 0.85–0.97. The power to enter the H-mode in He was found to be greater than ~2 times that for D discharges, in the low density region <1.4 × 10²⁰/m³. However, it appears to follow the D threshold for higher densities. The stored energies in the He discharges were about 80% of those in D, and about 40% higher net power was required to sustain them compared to D. Global particle confinement times for tungsten of τ*_W/τ_E ~ 4 were obtained with ELMy H-modes in He; however, accumulation occurred when the ELMs were irregular and infrequent. The electron temperatures and densities in the pedestal were similar between D and He discharges, and the ΔT_e/T_e and Δn_e/n_e values were similar or larger in He than D. The higher net power required to access the H-mode, and sustain it in flattop, for He discharges in C-Mod, imply some limitations for He operation in ITER.
NASA Astrophysics Data System (ADS)
Song, Teh-Ru Alex; Tanaka, Satoru; Takeuchi, Nozomu
2010-05-01
P waves traveling through the Earth's core typically include three distinct phases, PKPdf (or PKIKP), PKPbc and PKPab, and these waves have been frequently analyzed to study the structure of the outer core and inner core. It is well known that the PKPab waveform suffers a 90-degree phase shift when passing an internal caustic in the outer core and is theoretically equivalent to the Hilbert-transformed PKPbc (or PKPdf) waveform. Here, we report a dataset from an intermediate-depth earthquake in the Vanuatu Islands recorded by a PASSCAL broadband array in Cameroon, West Africa. Two anomalous features stand out in this record section. First, at periods of a few seconds and longer, most PKPab waveforms recorded by this array are anomalous in that they do not display the 90-degree phase shift that is observed at other stations in Europe. Secondly, in the high frequency band of 0.5 Hz to 2 Hz, two large arrivals separated by about 3.4 seconds are observed in the time window of the PKPab phase and are often absent in the time windows of the PKPdf and PKPbc phases. In addition, the second arrival seems to suffer some degree of phase shift relative to the first arrival. We examine several other record sections from nearby events in Tonga and they do not show such an anomalous feature, suggesting that receiver structures are probably not the cause of this observation. Note that the take-off angle of PKPab is typically 9-12 degrees shallower than that of PKPdf and PKPbc, and it is possible that near-source scattering from the slab may account for such an anomalous feature. We Hilbert transform P waveforms recorded at shorter ranges of less than 90 degrees and compare them with these anomalous PKPab waveforms. However, these Hilbert-transformed P waves show a clear 90-degree phase shift relative to PKPdf and PKPbc and differ from the PKPab waveforms, despite a difference in take-off angles of less than 5 degrees in some cases. It appears that near-source scattering and receiver-side structure do not play a predominant role in generating these anomalous PKPab waveforms. We then look into structural anomaly near the core-mantle boundary (CMB), since PKPab grazes the CMB at a very shallow angle and can effectively interact with it and possibly produce anomalous PKPab waveforms. We first explore a 1-D model space by introducing a velocity anomaly directly above the CMB, with velocity perturbations of up to a few tens of percent in S-wave and P-wave velocity. We calculate synthetics up to 2 Hz by the Direct Solution Method (DSM) and the Reflectivity Method to examine waveform anomalies in the long-period band (0.01-0.2 Hz) as well as the short-period band (0.5-2 Hz). Our preliminary result indicates that a model with a thin (~15 km) ultra-low velocity zone (ULVZ, 30% reduction in P-wave and S-wave velocity) is capable of reproducing the characteristics of these anomalous PKPab waveforms at both frequency bands. The pierce points of PKPab on the source side at the CMB are near the southeast Indian Ocean, where S-wave velocity is only slightly faster than PREM. On the other hand, the pierce points on the receiver side are at the eastern edge of the African Large Low Shear Velocity Province (LLSVP). One interesting feature of our ULVZ model is that dlnVs/dlnVp is about 1, which is different from most ULVZ models where dlnVs/dlnVp is about 3.
Flaschberger, Edith; Gugglberger, Lisa; Dietscher, Christina
2013-12-01
To change a school into a health-promoting organization, organizational learning is required. The evaluation of an Austrian regional health-promoting schools network provides qualitative data on the views of the different stakeholders on learning in this network (steering group, network coordinator and representatives of the network schools; n = 26). Through thematic analysis and deep-structure analyses, the following three forms of learning in the network were identified: (A) individual learning through input offered by the network coordination, (B) individual learning between the network schools, i.e. through exchange between the representatives of different schools and (C) learning within the participating schools, i.e. organizational learning. Learning between (B) or within the participating schools (C) seems to be rare in the network; concepts of individual teacher learning are prevalent. Difficulties detected relating to the transfer of information from the network to the member schools included barriers to organizational learning such as the lack of collaboration, coordination and communication in the network schools, which might be effects of the school system in which the observed network is located. To ensure connectivity of the information offered by the network, more emphasis should be put on linking health promotion to school development and the core processes of schools.
Official Guard and Reserve Manpower Strengths and Statistics. FY 1988.
1987-12-01
A Collaborative Learning Network Approach to Improvement: The CUSP Learning Network.
Weaver, Sallie J; Lofthus, Jennifer; Sawyer, Melinda; Greer, Lee; Opett, Kristin; Reynolds, Catherine; Wyskiel, Rhonda; Peditto, Stephanie; Pronovost, Peter J
2015-04-01
Collaborative improvement networks draw on the science of collaborative organizational learning and communities of practice to facilitate peer-to-peer learning, coaching, and local adaption. Although significant improvements in patient safety and quality have been achieved through collaborative methods, insight regarding how collaborative networks are used by members is needed. Improvement Strategy: The Comprehensive Unit-based Safety Program (CUSP) Learning Network is a multi-institutional collaborative network that is designed to facilitate peer-to-peer learning and coaching specifically related to CUSP. Member organizations implement all or part of the CUSP methodology to improve organizational safety culture, patient safety, and care quality. Qualitative case studies developed by participating members examine the impact of network participation across three levels of analysis (unit, hospital, health system). In addition, results of a satisfaction survey designed to evaluate member experiences were collected to inform network development. Common themes across case studies suggest that members found value in collaborative learning and sharing strategies across organizational boundaries related to a specific improvement strategy. The CUSP Learning Network is an example of network-based collaborative learning in action. Although this learning network focuses on a particular improvement methodology-CUSP-there is clear potential for member-driven learning networks to grow around other methods or topic areas. Such collaborative learning networks may offer a way to develop an infrastructure for longer-term support of improvement efforts and to more quickly diffuse creative sustainment strategies.
Helium experiments on Alcator C-Mod in support of ITER early operations
NASA Astrophysics Data System (ADS)
Kessel, C. E.; Wolfe, S. M.; Reinke, M. L.; Hughes, J. W.; Lin, Y.; Wukitch, S. J.; Baek, S. G.; Bonoli, P. T.; Chilenski, M.; Diallo, A.; the Alcator C-Mod Team
2018-05-01
Helium majority experiments on Alcator C-Mod were performed to compare with deuterium discharges, and inform ITER early operations. ELMy H-modes were produced with a special plasma shape at B_T = 5.3 T, I_P = 0.9 MA, at q_95 ~ 3.8. The He fraction ranged over n_He,L/n_L = 0.2-0.4, with n_D,L/n_L = 0.15-0.26, compared to D plasmas with n_D,L/n_L = 0.85-0.97. The power to enter the H-mode in He was found to be greater than ~2 times that for D discharges, in the low density region <1.4 × 10²⁰/m³. However, it appears to follow the D threshold for higher densities. The stored energies in the He discharges were about 80% of those in D, and about 40% higher net power was required to sustain them compared to D. Global particle confinement times for tungsten of τ*_W/τ_E ~ 4 were obtained with ELMy H-modes in He; however, accumulation occurred when the ELMs were irregular and infrequent. The electron temperatures and densities in the pedestal were similar between D and He discharges, and the ΔT_e/T_e and Δn_e/n_e values were similar or larger in He than D. The higher net power required to access the H-mode, and sustain it in flattop, for He discharges in C-Mod, imply some limitations for He operation in ITER.
NASA Astrophysics Data System (ADS)
Garrett, T. J.; Alva, S.; Glenn, I. B.; Krueger, S. K.
2015-12-01
There are two possible approaches for parameterizing sub-grid cloud dynamics in a coarser grid model. The most common is to use a fine scale model to explicitly resolve the mechanistic details of clouds to the best extent possible, and then to parameterize these behaviors as a cloud state for the coarser grid. A second is to invoke physical intuition and some very general theoretical principles from equilibrium statistical mechanics. This approach avoids any requirement to resolve time-dependent processes in order to arrive at a suitable solution. The second approach is widely used elsewhere in the atmospheric sciences: for example, the Planck function for blackbody radiation is derived this way, where no mention is made of the complexities of modeling a large ensemble of time-dependent radiation-dipole interactions in order to obtain the "grid-scale" spectrum of thermal emission by the blackbody as a whole. We find that this statistical approach may be equally suitable for modeling convective clouds. Specifically, we make the physical argument that the dissipation of buoyant energy in convective clouds is done through mixing across a cloud perimeter. From thermodynamic reasoning, one might then anticipate that vertically stacked isentropic surfaces are characterized by a power law dlnN/dlnP = -1, where N(P) is the number of clouds of perimeter P. In a Giga-LES simulation of convective clouds within a 100 km square domain we find that such a power law does appear to characterize simulated cloud perimeters along isentropes, provided a sufficient sample of clouds. The suggestion is that it may be possible to parameterize certain important aspects of cloud state without appealing to computationally expensive dynamic simulations.
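Testing the power law dlnN/dlnP = -1 amounts to fitting a slope of -1 in log-log space for the perimeter distribution. A minimal Python sketch (synthetic perimeters, not the Giga-LES output) shows the fitting step:

# Estimate the power-law exponent dlnN/dlnP of a cloud-perimeter sample.
import numpy as np

rng = np.random.default_rng(0)
u = rng.random(100_000)
P = 1000.0 ** u                       # inverse-CDF sample of N(P) ~ 1/P on [1, 1000]

counts, edges = np.histogram(P, bins=np.logspace(0, 3, 21))
centers = np.sqrt(edges[:-1] * edges[1:])
density = counts / np.diff(edges)     # clouds per unit perimeter
mask = counts > 0
slope = np.polyfit(np.log(centers[mask]), np.log(density[mask]), 1)[0]
print(round(slope, 2))                # close to -1, i.e. dlnN/dlnP = -1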
Learning Analytics for Networked Learning Models
ERIC Educational Resources Information Center
Joksimovic, Srecko; Hatala, Marek; Gaševic, Dragan
2014-01-01
Teaching and learning in networked settings has attracted significant attention recently. The central topic of networked learning research is human-human and human-information interactions occurring within a networked learning environment. The nature of these interactions is highly complex and usually requires a multi-dimensional approach to…
2006-07-01
was first studied by Krivoglaz and Ryaboshapka [1], Krivoglaz, et al. [2], and Wilkens [11, 20, 21]. More recently, Wu, et al. [22, 23] obtained the... al. [22] with an accuracy of 3 pct. over the range 0.1 ≤ M ≤ 10. The form of this function was: f(M) = a ln(M+1) + b ln²(M+1) + c ln³(M+1) + d ln⁴(M+1), (2... Williams-Hall technique proposed by Ungár, et al. [24]. Due to the limited number of peaks available for analysis, it was not possible to
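The fitted function quoted in the excerpt is a quartic polynomial in ln(M+1). A minimal Python sketch (placeholder coefficients, not the values from the report) simply evaluates it:

# Evaluate f(M) = a*ln(M+1) + b*ln^2(M+1) + c*ln^3(M+1) + d*ln^4(M+1).
import numpy as np

def f(M, a, b, c, d):
    x = np.log(np.asarray(M) + 1.0)
    return a * x + b * x**2 + c * x**3 + d * x**4

M = np.linspace(0.1, 10, 5)
print(f(M, a=1.0, b=-0.1, c=0.01, d=-0.001))   # illustrative coefficients only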
López-Barroso, Diana; Ripollés, Pablo; Marco-Pallarés, Josep; Mohammadi, Bahram; Münte, Thomas F; Bachoud-Lévi, Anne-Catherine; Rodriguez-Fornells, Antoni; de Diego-Balaguer, Ruth
2015-04-15
Although neuroimaging studies using standard subtraction-based analysis from functional magnetic resonance imaging (fMRI) have suggested that frontal and temporal regions are involved in word learning from fluent speech, the possible contribution of different brain networks during this type of learning is still largely unknown. Indeed, univariate fMRI analyses cannot identify the full extent of distributed networks that are engaged by a complex task such as word learning. Here we used Independent Component Analysis (ICA) to characterize the different brain networks subserving word learning from an artificial language speech stream. Results were replicated in a second cohort of participants with a different linguistic background. Four spatially independent networks were associated with the task in both cohorts: (i) a dorsal Auditory-Premotor network; (ii) a dorsal Sensory-Motor network; (iii) a dorsal Fronto-Parietal network; and (iv) a ventral Fronto-Temporal network. The level of engagement of these networks varied through the learning period with only the dorsal Auditory-Premotor network being engaged across all blocks. In addition, the connectivity strength of this network in the second block of the learning phase correlated with the individual variability in word learning performance. These findings suggest that: (i) word learning relies on segregated connectivity patterns involving dorsal and ventral networks; and (ii) specifically, the dorsal auditory-premotor network connectivity strength is directly correlated with word learning performance. Copyright © 2015 Elsevier Inc. All rights reserved.
Robust Learning of High-dimensional Biological Networks with Bayesian Networks
NASA Astrophysics Data System (ADS)
Nägele, Andreas; Dejori, Mathäus; Stetter, Martin
Structure learning of Bayesian networks applied to gene expression data has become a potentially useful method to estimate interactions between genes. However, the NP-hardness of Bayesian network structure learning renders the reconstruction of the full genetic network with thousands of genes unfeasible. Consequently, the maximal network size is usually restricted dramatically to a small set of genes (corresponding with variables in the Bayesian network). Although this feature reduction step makes structure learning computationally tractable, on the downside, the learned structure might be adversely affected due to the introduction of missing genes. Additionally, gene expression data are usually very sparse with respect to the number of samples, i.e., the number of genes is much greater than the number of different observations. Given these problems, learning robust network features from microarray data is a challenging task. This chapter presents several approaches tackling the robustness issue in order to obtain a more reliable estimation of learned network features.
ERIC Educational Resources Information Center
Lin, Jian-Wei; Huang, Hsieh-Hong; Chuang, Yuh-Shy
2015-01-01
An e-learning environment that supports social network awareness (SNA) is a highly effective means of increasing peer interaction and assisting student learning by raising awareness of social and learning contexts of peers. Network centrality profoundly impacts student learning in an SNA-related e-learning environment. Additionally,…
Teachers' Motives for Learning in Networks: Costs, Rewards and Community Interest
ERIC Educational Resources Information Center
van den Beemt, Antoine; Ketelaar, Evelien; Diepstraten, Isabelle; de Laat, Maarten
2018-01-01
Background: This paper discusses teachers' perspectives on learning networks and their motives for participating in these networks. Although it is widely held that teachers' learning may be developed through learning networks, not all teachers participate in such networks. Purpose: The theme of reciprocity, central to studies in the area of…
Up the ANTe: Understanding Entrepreneurial Leadership Learning through Actor-Network Theory
ERIC Educational Resources Information Center
Smith, Sue; Kempster, Steve; Barnes, Stewart
2017-01-01
This article explores the role of educators in supporting the development of entrepreneurial leadership learning by creating peer learning networks of owner-managers of small businesses. Using actor-network theory, the authors think through the process of constructing and maintaining a peer learning network (conceived of as an actor-network) and…
Learning by stimulation avoidance: A principle to control spiking neural networks dynamics
Sinapayen, Lana; Ikegami, Takashi
2017-01-01
Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows steering the dynamics of a biologically inspired neural network. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle “Learning by Stimulation Avoidance” (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other works, reinforcement learning with spiking networks can be obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions without a separate reward system. PMID:28158309
Learning by stimulation avoidance: A principle to control spiking neural networks dynamics.
Sinapayen, Lana; Masumori, Atsushi; Ikegami, Takashi
2017-01-01
Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows steering the dynamics of a biologically inspired neural network. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle "Learning by Stimulation Avoidance" (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other works, reinforcement learning with spiking networks can be obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions without a separate reward system.
Modular, Hierarchical Learning By Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Baldi, Pierre F.; Toomarian, Nikzad
1996-01-01
Modular and hierarchical approach to supervised learning by artificial neural networks leads to neural networks more structured than neural networks in which all neurons fully interconnected. These networks utilize general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, sparsity of modular units and connections, and fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning streamlined by imitating some aspects of biological neural networks.
Blending Formal and Informal Learning Networks for Online Learning
ERIC Educational Resources Information Center
Czerkawski, Betül C.
2016-01-01
With the emergence of social software and the advance of web-based technologies, online learning networks provide invaluable opportunities for learning, whether formal or informal. Unlike top-down, instructor-centered, and carefully planned formal learning settings, informal learning networks offer more bottom-up, student-centered participatory…
Constructing of Research-Oriented Learning Mode Based on Network Environment
ERIC Educational Resources Information Center
Wang, Ying; Li, Bing; Xie, Bai-zhi
2007-01-01
A research-oriented learning mode based on the network is significant for cultivating comprehensively developing, innovative people through network teaching in education for all-around development. This paper establishes a research-oriented learning mode by addressing the problems existing in research-oriented learning based on a network environment, and…
Maximum entropy methods for extracting the learned features of deep neural networks.
Finnegan, Alex; Song, Jun S
2017-10-01
New architectures of multilayer artificial neural networks and new methods for training them are rapidly revolutionizing the application of machine learning in diverse fields, including business, social science, physical sciences, and biology. Interpreting deep neural networks, however, currently remains elusive, and a critical challenge lies in understanding which meaningful features a network is actually learning. We present a general method for interpreting deep neural networks and extracting network-learned features from input data. We describe our algorithm in the context of biological sequence analysis. Our approach, based on ideas from statistical physics, samples from the maximum entropy distribution over possible sequences, anchored at an input sequence and subject to constraints implied by the empirical function learned by a network. Using our framework, we demonstrate that local transcription factor binding motifs can be identified from a network trained on ChIP-seq data and that nucleosome positioning signals are indeed learned by a network trained on chemical cleavage nucleosome maps. Imposing a further constraint on the maximum entropy distribution also allows us to probe whether a network is learning global sequence features, such as the high GC content in nucleosome-rich regions. This work thus provides valuable mathematical tools for interpreting and extracting learned features from feed-forward neural networks.
Gyurko, David M; Soti, Csaba; Stetak, Attila; Csermely, Peter
2014-05-01
During the last decade, network approaches became a powerful tool to describe protein structure and dynamics. Here, we first describe the protein structure networks of molecular chaperones, then characterize chaperone-containing sub-networks of interactomes, called chaperone-networks or chaperomes. We review the role of molecular chaperones in short-term adaptation of cellular networks in response to stress, and in long-term adaptation, discussing their putative functions in the regulation of evolvability. We provide a general overview of possible network mechanisms of adaptation, learning and memory formation. We propose that changes of network rigidity play a key role in learning and memory formation processes. Flexible network topology provides a 'learning-competent' state. Here, networks may have much weaker modular boundaries than locally rigid, highly modular networks, where the learnt information has already been consolidated in a memory formation process. Since modular boundaries are efficient filters of information, in the 'learning-competent' state information filtering may be much smaller than after memory formation. This mechanism restricts high information transfer to the 'learning-competent' state. After memory formation, modular boundary-induced segregation and information filtering protect the stored information. The flexible networks of young organisms are generally in a 'learning-competent' state. On the contrary, locally rigid networks of old organisms have lost their 'learning-competent' state, but store and protect their learnt information efficiently. We anticipate that the above mechanism may operate at the level of both protein-protein interaction and neuronal networks.
Accelerating Learning By Neural Networks
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad; Barhen, Jacob
1992-01-01
Electronic neural networks made to learn faster by use of terminal teacher forcing. Method of supervised learning involves addition of teacher forcing functions to excitations fed as inputs to output neurons. Initially, teacher forcing functions are strong enough to force outputs to desired values; subsequently, these functions decay with time. When learning successfully completed, terminal teacher forcing vanishes, and dynamics of neural network become equivalent to those of conventional neural network. Simulated neural network with terminal teacher forcing learned to produce close approximation of circular trajectory in 400 iterations.
Network congestion control algorithm based on Actor-Critic reinforcement learning model
NASA Astrophysics Data System (ADS)
Xu, Tao; Gong, Lina; Zhang, Wei; Li, Xuhong; Wang, Xia; Pan, Wenwen
2018-04-01
Aiming at the network congestion control problem, a congestion control algorithm based on the Actor-Critic reinforcement learning model is designed. By incorporating a genetic algorithm into the congestion control strategy, network congestion can be detected and prevented more effectively. A simulation experiment of the network congestion control algorithm is designed according to Actor-Critic reinforcement learning. The simulation experiments verify that the AQM controller can predict the dynamic characteristics of the network system. Moreover, the learning strategy is adopted to optimize network performance, and the dropping probability of packets is adaptively adjusted so as to improve network performance and avoid congestion. Based on the above findings, it is concluded that the network congestion control algorithm based on the Actor-Critic reinforcement learning model can effectively avoid the occurrence of TCP network congestion.
Cooperative Learning for Distributed In-Network Traffic Classification
NASA Astrophysics Data System (ADS)
Joseph, S. B.; Loo, H. R.; Ismail, I.; Andromeda, T.; Marsono, M. N.
2017-04-01
Inspired by the concept of autonomic distributed/decentralized network management schemes, we consider the issue of information exchange among distributed network nodes to improve network performance and promote scalability for in-network monitoring. In this paper, we propose a cooperative learning algorithm for propagation and synchronization of network information among autonomic distributed network nodes for online traffic classification. The results show that network nodes with sharing capability perform better with a higher average accuracy of 89.21% (sharing data) and 88.37% (sharing clusters) compared to 88.06% for nodes without cooperative learning capability. The overall performance indicates that cooperative learning is promising for distributed in-network traffic classification.
node2vec: Scalable Feature Learning for Networks
Grover, Aditya; Leskovec, Jure
2016-01-01
Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node’s network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks. PMID:27853626
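The biased random walk at the heart of node2vec interpolates between BFS-like and DFS-like exploration. A minimal Python sketch (simplified, unweighted toy graph of my own; p and q are the return and in-out parameters described in the node2vec paper) illustrates the walk step:

# Simplified node2vec-style second-order biased random walk.
import random

graph = {  # toy undirected graph as an adjacency dict (hypothetical example)
    "a": ["b", "c"], "b": ["a", "c", "d"], "c": ["a", "b"], "d": ["b"],
}

def biased_walk(start, length, p=1.0, q=0.5):
    walk = [start, random.choice(graph[start])]
    while len(walk) < length:
        prev, cur = walk[-2], walk[-1]
        weights = []
        for nxt in graph[cur]:
            if nxt == prev:               # return to the previous node
                weights.append(1.0 / p)
            elif nxt in graph[prev]:      # distance 1 from prev: BFS-like move
                weights.append(1.0)
            else:                         # distance 2 from prev: DFS-like move
                weights.append(1.0 / q)
        walk.append(random.choices(graph[cur], weights=weights)[0])
    return walk

random.seed(0)
print(biased_walk("a", 8))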
An Investigation of Long Term Orbits About the Planet Mars Using a Dynamic Atmosphere Model
1989-12-01
O ( c IT1 I,5")) Figure 1.2: Ionosphere...20.1272 2TC - Met*t210 | ’K 40 - Ix cl o ~ /2-. o 6 230 220Z- \\AW - Ii Z10 - A €, c , , , , zoo- ’, I ,, O - %%" ISIO- %% 170- U Figure 1.4: Atmospheric...sini +’I-e2 R n ’ C ’. dk -_ -e2 R + h coti R - k P’yI-e 2 R (.1.1.4d) dt na- h na2it-e2 i haS O r)LN d121 IR (. .1. 4e) dt na2 sini il-e2 i dLN
Use of the Digamma Function in Statistical Astrophysics Distributions
NASA Astrophysics Data System (ADS)
Cahill, Michael
2017-06-01
Relaxed astrophysical statistical distributions may be constructed by using the inverse of a most probable energy distribution equation giving the energy e_i of each particle in cell i in terms of the cell’s particle population N_i. The digamma-mediated equation is A + B e_i = Ψ(1 + N_i), where the constants A & B are Lagrange multipliers and Ψ is the digamma function given by Ψ(1+x) = dln(x!)/dx. Results are discussed for a Monatomic Ideal Gas, Atmospheres of Spherical Planets or Satellites and for Spherical Globular Clusters. These distributions are self-terminating even if other factors do not cause a cutoff. The examples are discussed classically but relativistic extensions are possible.
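Using the relation the other way around, the cell population follows by inverting the digamma function numerically. A minimal Python sketch (my own illustration with placeholder Lagrange multipliers, not the talk's code):

# Invert A + B*e_i = digamma(1 + N_i) for the cell population N_i.
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

def N_of_e(e, A=0.0, B=1.0):
    """Solve digamma(1 + N) = A + B*e for N >= 0; A, B are placeholder multipliers."""
    target = A + B * e
    if target <= digamma(1.0):        # below the minimum reachable value: empty cell
        return 0.0
    return brentq(lambda N: digamma(1.0 + N) - target, 0.0, 1e9)

for e in [0.0, 1.0, 3.0]:
    print(e, round(N_of_e(e), 3))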
Peer Learning Network: Implementing and Sustaining Cooperative Learning by Teacher Collaboration
ERIC Educational Resources Information Center
Miquel, Ester; Duran, David
2017-01-01
This article describes an in-service teachers' staff-development model, "Peer Learning Network", and presents results about its efficiency. "Peer Learning Network" promotes three levels of peer learning simultaneously (among pupils, teachers, and schools). It supports pairs of teachers from several schools, who are linked…
The Integration of Personal Learning Environments & Open Network Learning Environments
ERIC Educational Resources Information Center
Tu, Chih-Hsiung; Sujo-Montes, Laura; Yen, Cherng-Jyh; Chan, Junn-Yih; Blocher, Michael
2012-01-01
Learning management systems traditionally provide structures to guide online learners to achieve their learning goals. Web 2.0 technology empowers learners to create, share, and organize their personal learning environments in open network environments; and allows learners to engage in social networking and collaborating activities. Advanced…
ERIC Educational Resources Information Center
Cohen, Moshe; And Others
Electronic networks provide new opportunities to create functional learning environments which allow students in many different locations to carry out joint educational activities. A set of participant observation studies was conducted in the context of a cross-cultural, cross-language network called the Intercultural Learning Network in order to…
ERIC Educational Resources Information Center
Firdausiah Mansur, Andi Besse; Yusof, Norazah
2013-01-01
Clustering on Social Learning Networks is still not widely explored, especially when the network focuses on an e-learning system. Conventional methods are not really suitable for e-learning data. SNA requires content analysis, which involves human intervention and needs to be carried out manually. Some of the previous clustering techniques need…
GA-based fuzzy reinforcement learning for control of a magnetic bearing system.
Lin, C T; Jou, C P
2000-01-01
This paper proposes a TD (temporal difference) and GA (genetic algorithm)-based reinforcement (TDGAR) learning method and applies it to the control of a real magnetic bearing system. The TDGAR learning scheme is a new hybrid GA, which integrates the TD prediction method and the GA to perform the reinforcement learning task. The TDGAR learning system is composed of two integrated feedforward networks. One neural network acts as a critic network to guide the learning of the other network (the action network) which determines the outputs (actions) of the TDGAR learning system. The action network can be a normal neural network or a neural fuzzy network. Using the TD prediction method, the critic network can predict the external reinforcement signal and provide a more informative internal reinforcement signal to the action network. The action network uses the GA to adapt itself according to the internal reinforcement signal. The key concept of the TDGAR learning scheme is to formulate the internal reinforcement signal as the fitness function for the GA such that the GA can evaluate the candidate solutions (chromosomes) regularly, even during periods without external feedback from the environment. This enables the GA to proceed to new generations regularly without waiting for the arrival of the external reinforcement signal. This can usually accelerate the GA learning since a reinforcement signal may only be available at a time long after a sequence of actions has occurred in the reinforcement learning problem. The proposed TDGAR learning system has been used to control an active magnetic bearing (AMB) system in practice. A systematic design procedure is developed to achieve successful integration of all the subsystems including magnetic suspension, mechanical structure, and controller training. The results show that the TDGAR learning scheme can successfully find a neural controller or a neural fuzzy controller for a self-designed magnetic bearing system.
Learning in a Network: A "Third Way" between School Learning and Workplace Learning?
ERIC Educational Resources Information Center
Bottrup, Pernille
2005-01-01
Purpose--The aim of this article is to examine network-based learning and discuss how participation in a network can enhance organisational learning. Design/methodology/approach--In recent years, companies have increased their collaboration with other organisations, suppliers, customers, etc., in order to meet challenges from a globalised market.…
How Neural Networks Learn from Experience.
ERIC Educational Resources Information Center
Hinton, Geoffrey E.
1992-01-01
Discusses computational studies of learning in artificial neural networks and findings that may provide insights into the learning abilities of the human brain. Describes efforts to test theories about brain information processing, using artificial neural networks. Vignettes include information concerning how a neural network represents…
Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis
NASA Astrophysics Data System (ADS)
Chernoded, Andrey; Dudko, Lev; Myagkov, Igor; Volkov, Petr
2017-10-01
Most of the modern analyses in high energy physics use signal-versus-background classification techniques of machine learning methods and neural networks in particular. A deep learning neural network is the most promising modern technique to separate signal and background, and nowadays it can be widely and successfully implemented as a part of a physical analysis. In this article we compare the application of deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.
Quantitative learning strategies based on word networks
NASA Astrophysics Data System (ADS)
Zhao, Yue-Tian-Yi; Jia, Zi-Yang; Tang, Yong; Xiong, Jason Jie; Zhang, Yi-Cheng
2018-02-01
Learning English requires a considerable effort, but the way that vocabulary is introduced in textbooks is not optimized for learning efficiency. With the increasing population of English learners, optimizing the learning process will have a significant impact on English learning and teaching. The recent developments of big data analysis and complex network science provide additional opportunities to design and further investigate strategies in English learning. In this paper, quantitative English learning strategies based on word networks and word usage information are proposed. The strategies integrate word frequency with topological structural information. By analyzing the influence of connected learned words, the learning weights for the unlearned words and the dynamic updating of the network are studied. The results suggest that quantitative strategies significantly improve learning efficiency while maintaining effectiveness. In particular, the optimized-weight-first strategy and segmented strategies outperform other strategies. The results provide opportunities for researchers and practitioners to reconsider the way of English teaching and to design vocabularies quantitatively by balancing efficiency and learning costs based on the word network.
A high-capacity model for one shot association learning in the brain
Einarsson, Hafsteinn; Lengler, Johannes; Steger, Angelika
2014-01-01
We present a high-capacity model for one-shot association learning (hetero-associative memory) in sparse networks. We assume that basic patterns are pre-learned in networks and associations between two patterns are presented only once and have to be learned immediately. The model is a combination of an Amit-Fusi like network sparsely connected to a Willshaw type network. The learning procedure is palimpsest and comes from earlier work on one-shot pattern learning. However, in our setup we can enhance the capacity of the network by iterative retrieval. This yields a model for sparse brain-like networks in which populations of a few thousand neurons are capable of learning hundreds of associations even if they are presented only once. The analysis of the model is based on a novel result by Janson et al. on bootstrap percolation in random graphs. PMID:25426060
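For intuition about one-shot hetero-associative storage with clipped binary weights, here is a minimal Python sketch (a plain Willshaw-type memory, a much simplified stand-in for the Amit-Fusi/Willshaw combination and iterative retrieval described above):

# One-shot storage and recall in a binary (clipped-weight) hetero-associative memory.
import numpy as np

rng = np.random.default_rng(1)
n, k, n_pairs = 200, 10, 50            # pattern size, active units per pattern, stored pairs
W = np.zeros((n, n), dtype=np.uint8)

def sparse_pattern():
    v = np.zeros(n, dtype=np.uint8)
    v[rng.choice(n, size=k, replace=False)] = 1
    return v

pairs = [(sparse_pattern(), sparse_pattern()) for _ in range(n_pairs)]
for x, y in pairs:
    W = np.maximum(W, np.outer(x, y))  # one-shot Hebbian storage with clipped weights

x0, y0 = pairs[0]
recall = ((x0 @ W) >= k).astype(np.uint8)   # fire units driven by all k active inputs
print(np.array_equal(recall, y0))           # the once-presented association is recovered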
Adaptive categorization of ART networks in robot behavior learning using game-theoretic formulation.
Fung, Wai-keung; Liu, Yun-hui
2003-12-01
Adaptive Resonance Theory (ART) networks are employed in robot behavior learning. Online robot behavior learning faces two difficulties: (1) memory grows exponentially with time, and (2) it is hard for operators to specify the required learning accuracy and to control learning attention before learning. To remedy these difficulties, this paper introduces an adaptive categorization mechanism in ART networks for categorizing perceptual and action patterns. A game-theoretic formulation of adaptive categorization is proposed in which the vigilance parameter is adapted to control the size of the categories formed. The proposed vigilance update rule improves categorization performance in terms of category number stability and removes the need to select an initial vigilance parameter before pattern categorization, as required in traditional ART networks. Behavior learning with a physical robot demonstrates the effectiveness of the proposed adaptive categorization mechanism.
The Structural Underpinnings of Policy Learning: A Classroom Policy Simulation
NASA Astrophysics Data System (ADS)
Bird, Stephen
This paper investigates the relationship between the centrality of individual actors in a social network structure and their policy learning performance. In a dynamic comparable to real-world policy networks, results from a classroom simulation demonstrate a strong relationship between centrality in social learning networks and grade performance. Previous research indicates that social network centrality should have a positive effect on learning in other contexts, and this link is tested here in a policy learning context. Second, the distinction between collaborative learning and information diffusion processes in policy learning is examined. Third, frequency of interaction is analyzed to determine whether consistent, frequent ties have a greater impact on the learning process. Finally, the data are analyzed to determine whether the benefits of centrality have limitations or thresholds beyond which benefits no longer accrue. These results demonstrate the importance of network structure and support a collaborative conceptualization of the policy learning process.
How Are Television Networks Involved in Distance Learning?
ERIC Educational Resources Information Center
Bucher, Katherine
1996-01-01
Reviews the involvement of various television networks in distance learning, including public broadcasting stations, Cable in the Classroom, Arts and Entertainment Network, Black Entertainment Television, C-SPAN, CNN (Cable News Network), The Discovery Channel, The Learning Channel, Mind Extension University, The Weather Channel, National Teacher…
Rethinking the learning of belief network probabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Musick, R.
Belief networks are a powerful tool for knowledge discovery that provide concise, understandable probabilistic models of data. There are methods grounded in probability theory to incrementally update the relationships described by the belief network when new information is seen, to perform complex inferences over any set of variables in the data, to incorporate domain expertise and prior knowledge into the model, and to automatically learn the model from data. This paper concentrates on part of the belief network induction problem: learning the quantitative structure (the conditional probabilities), given the qualitative structure. In particular, the current practice of rote learning the probabilities in belief networks can be significantly improved upon. We advance the idea of applying any learning algorithm to the task of conditional probability learning in belief networks, discuss potential benefits, and show results of applying neural networks and other algorithms to a medium-sized car insurance belief network. The results demonstrate from 10 to 100% improvements in model error rates over the current approaches.
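To make the quantitative part of the problem concrete, the sketch below fits a conditional probability table by smoothed counting for one child node with a known parent set; the variables are invented stand-ins, and the paper's car insurance network and its neural-network-based estimators are not reproduced.

```python
# Hedged sketch: fitting a conditional probability table by counting with
# Laplace smoothing, given a known qualitative structure. The variables are
# invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 1000
data = pd.DataFrame({
    "YoungDriver": rng.integers(0, 2, n),
    "SportsCar":   rng.integers(0, 2, n),
})
# Child node generated with some dependence on its two parents.
p_claim = 0.1 + 0.3 * data["YoungDriver"] + 0.2 * data["SportsCar"]
data["Claim"] = (rng.random(n) < p_claim).astype(int)

def learn_cpt(df, child, parents, alpha=1.0):
    """P(child | parents) by smoothed maximum likelihood."""
    counts = df.groupby(parents + [child]).size().unstack(fill_value=0)
    counts = counts + alpha                      # Laplace smoothing
    return counts.div(counts.sum(axis=1), axis=0)

print(learn_cpt(data, "Claim", ["YoungDriver", "SportsCar"]))
```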
Cascade Back-Propagation Learning in Neural Networks
NASA Technical Reports Server (NTRS)
Duong, Tuan A.
2003-01-01
The cascade back-propagation (CBP) algorithm is the basis of a conceptual design for accelerating learning in artificial neural networks. The neural networks would be implemented as analog very-large-scale integrated (VLSI) circuits, and circuits to implement the CBP algorithm would be fabricated on the same VLSI circuit chips with the neural networks. Heretofore, artificial neural networks have learned slowly because it has been necessary to train them via software, for lack of a good on-chip learning technique. The CBP algorithm is an on-chip technique that provides for continuous learning in real time. Artificial neural networks are trained by example: A network is presented with training inputs for which the correct outputs are known, and the algorithm strives to adjust the weights of synaptic connections in the network to make the actual outputs approach the correct outputs. The input data are generally divided into three parts. Two of the parts, called the "training" and "cross-validation" sets, respectively, must be such that the corresponding input/output pairs are known. During training, the cross-validation set enables verification of the status of the input-to-output transformation learned by the network to avoid over-learning. The third part of the data, termed the "test" set, consists of the inputs that are required to be transformed into outputs; this set may or may not include the training set and/or the cross-validation set. Proposed neural-network circuitry for on-chip learning would be divided into two distinct networks; one for training and one for validation. Both networks would share the same synaptic weights.
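The training and cross-validation split described above can be illustrated with a simple early-stopping loop; the model, split ratio, and patience value below are assumptions, and the sketch does not implement the CBP on-chip algorithm itself.

```python
# Hedged sketch of the three-way data split described above: a training set for
# weight updates and a cross-validation set used to stop training before
# over-learning. The model, split ratio, and patience are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(600, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=600)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(20,), random_state=0)
best_val, patience, wait = np.inf, 20, 0
for epoch in range(500):
    model.partial_fit(X_train, y_train)          # one more pass over the training set
    val_err = np.mean((model.predict(X_val) - y_val) ** 2)
    if val_err < best_val - 1e-6:
        best_val, wait = val_err, 0
    else:
        wait += 1
        if wait >= patience:                     # cross-validation error stopped improving
            break
print("stopped after", epoch + 1, "epochs; validation MSE:", round(best_val, 4))
```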
Learning in Artificial Neural Systems
NASA Technical Reports Server (NTRS)
Matheus, Christopher J.; Hohensee, William E.
1987-01-01
This paper presents an overview and analysis of learning in Artificial Neural Systems (ANS's). It begins with a general introduction to neural networks and connectionist approaches to information processing. The basis for learning in ANS's is then described, and compared with classical Machine learning. While similar in some ways, ANS learning deviates from tradition in its dependence on the modification of individual weights to bring about changes in a knowledge representation distributed across connections in a network. This unique form of learning is analyzed from two aspects: the selection of an appropriate network architecture for representing the problem, and the choice of a suitable learning rule capable of reproducing the desired function within the given network. The various network architectures are classified, and then identified with explicit restrictions on the types of functions they are capable of representing. The learning rules, i.e., algorithms that specify how the network weights are modified, are similarly taxonomized, and where possible, the limitations inherent to specific classes of rules are outlined.
NASA Astrophysics Data System (ADS)
Beghein, Caroline; Trampert, Jeannot
2004-01-01
The presence of radial anisotropy in the upper mantle, transition zone and top of the lower mantle is investigated by applying a model space search technique to Rayleigh and Love wave phase velocity models. Probability density functions are obtained independently for S-wave anisotropy, P-wave anisotropy, intermediate parameter η, Vp, Vs and density anomalies. The likelihoods for P-wave and S-wave anisotropy beneath continents cannot be explained by a dry olivine-rich upper mantle at depths larger than 220 km. Indeed, while shear-wave anisotropy tends to disappear below 220 km depth in continental areas, P-wave anisotropy is still present but its sign changes compared to the uppermost mantle. This could be due to an increase with depth of the amount of pyroxene relative to olivine in these regions, although the presence of water, partial melt or a change in the deformation mechanism cannot be ruled out as yet. A similar observation is made for old oceans, but not for young ones where VSH> VSV appears likely down to 670 km depth and VPH> VPV down to 400 km depth. The change of sign in P-wave anisotropy seems to be qualitatively correlated with the presence of the Lehmann discontinuity, generally observed beneath continents and some oceans but not beneath ridges. Parameter η shows a similar age-related depth pattern as shear-wave anisotropy in the uppermost mantle and it undergoes the same change of sign as P-wave anisotropy at 220 km depth. The ratio between dln Vs and dln Vp suggests that a chemical component is needed to explain the anomalies in most places at depths greater than 220 km. More tests are needed to infer the robustness of the results for density, but they do not affect the results for anisotropy.
A PRECISE CLUSTER MASS PROFILE AVERAGED FROM THE HIGHEST-QUALITY LENSING DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umetsu, Keiichi; Broadhurst, Tom; Zitrin, Adi
2011-09-01
We outline our methods for obtaining high-precision mass profiles, combining independent weak-lensing distortion, magnification, and strong-lensing measurements. For massive clusters, the strong- and weak-lensing regimes contribute equal logarithmic coverage of the radial profile. The utility of high-quality data is limited by the cosmic noise from large-scale structure along the line of sight. This noise is overcome when stacking clusters, as too are the effects of cluster asphericity and substructure, permitting a stringent test of theoretical models. We derive a mean radial mass profile of four similar-mass clusters from high-quality Hubble Space Telescope and Subaru images, in the range R = 40-2800 kpc h^-1, where the inner radial boundary is sufficiently large to avoid smoothing from miscentering effects. The stacked mass profile is detected at 58σ significance over the entire radial range, with the contribution from the cosmic noise included. We show that the projected mass profile has a continuously steepening gradient out to beyond the virial radius, in remarkably good agreement with the standard Navarro-Frenk-White form predicted for the family of cold dark matter (CDM) dominated halos in gravitational equilibrium. The central slope is constrained to lie in the range -dln ρ/dln r = 0.89^{+0.27}_{-0.39}. The mean concentration is c_vir = 7.68^{+0.42}_{-0.40} (at M_vir = 1.54^{+0.11}_{-0.10} x 10^15 M_sun h^-1), which is high for relaxed, high-mass clusters, but consistent with ΛCDM when a sizable projection bias estimated from N-body simulations is considered. This possible tension will be more definitively explored with new cluster surveys, such as CLASH, LoCuSS, Subaru Hyper Suprime-Cam, and XMM-XXL, to construct the c_vir-M_vir relation over a wider mass range.
Wilson, Kirsty L; Xiang, Sue D; Plebanski, Magdalena
2015-01-01
The development of practical and flexible vaccines to target liver stage malaria parasites would benefit from an ability to induce high levels of CD8 T cells to minimal peptide epitopes. Herein we compare different adjuvant and carrier systems in a murine model for induction of interferon gamma (IFN-γ) producing CD8 T cells to the minimal immuno-dominant peptide epitope from the circumsporozoite protein (CSP) of Plasmodium berghei, pb9 (SYIPSAEKI, referred to as KI). Two pro-inflammatory adjuvants, Montanide and Poly I:C, and a non-classical, non-inflammatory nanoparticle based carrier (polystyrene nanoparticles, PSNPs), were compared side-by-side for their ability to induce potentially protective CD8 T cell responses after two immunizations. KI in Montanide (Montanide + KI) or covalently conjugated to PSNPs (PSNPs-KI) induced such high responses, whereas adjuvanting with Poly I:C or PSNPs without conjugation was ineffective. This result was consistent with an observed induction of an immunosuppressed environment by Poly I:C in the draining lymph node (dLN) 48 h post injection, which was reflected by increased frequencies of myeloid derived suppressor cells (MDSCs) and a proportion of inflammation reactive regulatory T cells (Treg) expressing the tumor necrosis factor receptor 2 (TNFR2), as well as decreased dendritic cell (DC) maturation. The other inflammatory adjuvant, Montanide, also promoted proportional increases in the TNFR2(+) Treg subpopulation, but not MDSCs, in the dLN. By contrast, injection with non-inflammatory PSNPs did not cause these changes. Induction of high CD8 T cell responses, using minimal peptide epitopes, can be achieved by non-inflammatory carrier nanoparticles, which in contrast to some conventional inflammatory adjuvants, do not expand either MDSCs or inflammation reactive Tregs at the site of priming.
ERIC Educational Resources Information Center
de Laat, Maarten; Lally, Vic; Lipponen, Lasse; Simons, Robert-Jan
2007-01-01
The focus of this study is to explore the advances that Social Network Analysis (SNA) can bring, in combination with other methods, when studying Networked Learning/Computer-Supported Collaborative Learning (NL/CSCL). We present a general overview of how SNA is applied in NL/CSCL research; we then go on to illustrate how this research method can…
Sea ice classification using fast learning neural networks
NASA Technical Reports Server (NTRS)
Dawson, M. S.; Fung, A. K.; Manry, M. T.
1992-01-01
A fast learning neural network approach to the classification of sea ice is presented. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) were tested on simulated data sets based on the known dominant scattering characteristics of the target class. Four classes were used in the data simulation: open water, thick lossy saline ice, thin saline ice, and multiyear ice. The BP network was unable to consistently converge to less than 25 percent error, while the FL method yielded an average error of approximately 1 percent on the first iteration of training. The fast learning method presented can significantly reduce the CPU time necessary to train a neural network as well as consistently yield higher classification accuracy than BP networks.
Connectivism and Information Literacy: Moving from Learning Theory to Pedagogical Practice
ERIC Educational Resources Information Center
Transue, Beth M.
2013-01-01
Connectivism is an emerging learning theory positing that knowledge comprises networked relationships and that learning comprises the ability to successfully navigate through these networks. Successful pedagogical strategies involve the instructor helping students to identify, navigate, and evaluate information from their learning networks. Many…
Comparison between extreme learning machine and wavelet neural networks in data classification
NASA Astrophysics Data System (ADS)
Yahia, Siwar; Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri
2017-03-01
Extreme learning machine (ELM) is a well-known learning algorithm in the field of machine learning, based on a feed-forward neural network with a single hidden layer. It is an extremely fast learning algorithm with good generalization performance. In this paper, we compare the extreme learning machine with wavelet neural networks, another widely used algorithm. We used six benchmark data sets to evaluate each technique: Wisconsin Breast Cancer, Glass Identification, Ionosphere, Pima Indians Diabetes, Wine Recognition and Iris Plant. Experimental results show that both extreme learning machine and wavelet neural networks achieve good results.
Three learning phases for radial-basis-function networks.
Schwenker, F; Kestler, H A; Palm, G
2001-05-01
In this paper, learning algorithms for radial basis function (RBF) networks are discussed. Whereas multilayer perceptrons (MLP) are typically trained with backpropagation algorithms, starting the training procedure with a random initialization of the MLP's parameters, an RBF network may be trained in many different ways. We categorize these RBF training methods into one-, two-, and three-phase learning schemes. Two-phase RBF learning is a very common learning scheme. The two layers of an RBF network are learnt separately; first the RBF layer is trained, including the adaptation of centers and scaling parameters, and then the weights of the output layer are adapted. RBF centers may be trained by clustering, vector quantization and classification tree algorithms, and the output layer by supervised learning (through gradient descent or pseudo inverse solution). Results from numerical experiments of RBF classifiers trained by two-phase learning are presented in three completely different pattern recognition applications: (a) the classification of 3D visual objects; (b) the recognition of hand-written digits (2D objects); and (c) the categorization of high-resolution electrocardiograms given as a time series (1D objects) and as a set of features extracted from these time series. In these applications, it can be observed that the performance of RBF classifiers trained with two-phase learning can be improved through a third backpropagation-like training phase of the RBF network, adapting the whole set of parameters (RBF centers, scaling parameters, and output layer weights) simultaneously. This, we call three-phase learning in RBF networks. A practical advantage of two- and three-phase learning in RBF networks is the possibility to use unlabeled training data for the first training phase. Support vector (SV) learning in RBF networks is a different learning approach. SV learning can be considered, in this context of learning, as a special type of one-phase learning, where only the output layer weights of the RBF network are calculated, and the RBF centers are restricted to be a subset of the training data. Numerical experiments with several classifier schemes including k-nearest-neighbor, learning vector quantization and RBF classifiers trained through two-phase, three-phase and support vector learning are given. The performance of the RBF classifiers trained through SV learning and three-phase learning is superior to the results of two-phase learning, but SV learning often leads to complex network structures, since the number of support vectors is not a small fraction of the total number of data points.
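The two-phase scheme described above can be sketched as unsupervised placement of the RBF centres followed by a pseudo-inverse solution for the output weights; the dataset, the number of centres, and the width heuristic below are assumptions for illustration.

```python
# Sketch of two-phase RBF training: phase one fits the RBF centres by
# clustering, phase two solves the linear output layer by pseudo-inverse.
# Widths use a simple heuristic; dataset and sizes are invented.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, n_features=5, random_state=0)
Y = np.eye(2)[y]                                  # one-hot targets

# Phase 1: unsupervised placement of RBF centres (unlabeled data suffices).
k = 15
centres = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).cluster_centers_
sigma = np.mean(np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2))

def design_matrix(X):
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

# Phase 2: supervised output weights via pseudo-inverse.
H = design_matrix(X)
W = np.linalg.pinv(H) @ Y

pred = np.argmax(design_matrix(X) @ W, axis=1)
print("training accuracy:", np.mean(pred == y))
# A third, backpropagation-like phase would fine-tune centres, widths and W jointly.
```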
Thermodynamic efficiency of learning a rule in neural networks
NASA Astrophysics Data System (ADS)
Goldt, Sebastian; Seifert, Udo
2017-11-01
Biological systems have to build models from their sensory input data that allow them to efficiently process previously unseen inputs. Here, we study a neural network learning a binary classification rule for these inputs from examples provided by a teacher. We analyse the ability of the network to apply the rule to new inputs, that is to generalise from past experience. Using stochastic thermodynamics, we show that the thermodynamic costs of the learning process provide an upper bound on the amount of information that the network is able to learn from its teacher for both batch and online learning. This allows us to introduce a thermodynamic efficiency of learning. We analytically compute the dynamics and the efficiency of a noisy neural network performing online learning in the thermodynamic limit. In particular, we analyse three popular learning algorithms, namely Hebbian, Perceptron and AdaTron learning. Our work extends the methods of stochastic thermodynamics to a new type of learning problem and might form a suitable basis for investigating the thermodynamics of decision-making.
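A teacher-student sketch of online rule learning, in the spirit of the setup described above, is given below; it tracks only the generalization error of a Perceptron-rule learner and makes no attempt to reproduce the paper's thermodynamic bookkeeping. The network size, number of examples, and update scaling are assumptions.

```python
# Hedged teacher-student sketch: a student perceptron learns a teacher's binary
# classification rule from a stream of examples; generalization error is the
# angle between teacher and student weight vectors.
import numpy as np

rng = np.random.default_rng(4)
N = 200                                    # input dimension
teacher = rng.normal(size=N)
teacher /= np.linalg.norm(teacher)
student = np.zeros(N)

def generalization_error(w):
    """Angle-based error between teacher and student hyperplanes."""
    if np.linalg.norm(w) == 0:
        return 0.5
    cos = np.dot(w, teacher) / np.linalg.norm(w)
    return np.arccos(np.clip(cos, -1, 1)) / np.pi

for t in range(5000):
    x = rng.normal(size=N)
    label = np.sign(np.dot(teacher, x))
    if np.sign(np.dot(student, x)) != label:   # Perceptron rule: learn on mistakes
        student += label * x / np.sqrt(N)

print("final generalization error:", round(generalization_error(student), 3))
```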
Disseminating Innovations in Teaching Value-Based Care Through an Online Learning Network.
Gupta, Reshma; Shah, Neel T; Moriates, Christopher; Wallingford, September; Arora, Vineet M
2017-08-01
A national imperative to provide value-based care requires new strategies to teach clinicians about high-value care. We developed a virtual online learning network aimed at disseminating emerging strategies in teaching value-based care. The online Teaching Value in Health Care Learning Network includes monthly webinars that feature selected innovators, online discussion forums, and a repository for sharing tools. The learning network comprises clinician-educators and health system leaders across North America. We conducted a cross-sectional online survey of all webinar presenters and the active members of the network, and we assessed program feasibility. Six months after the program launched, there were 277 learning community members in 22 US states. Of the 74 active members, 50 (68%) completed the evaluation. Active members represented independently practicing physicians and trainees in 7 specialties, nurses, educators, and health system leaders. Nearly all speakers reported that the learning network provided them with a unique opportunity to connect with a different audience and achieve greater recognition for their work. Of the members who were active in the learning network, most reported that strategies gleaned from the network were helpful, and some adopted or adapted these innovations at their home institutions. One year after the program launched, the learning network had grown to 364 total members. The learning network helped participants share and implement innovations to promote high-value care. The model can help disseminate innovations in emerging areas of health care transformation, and is sustainable without ongoing support after a period of start-up funding.
Behavioral Profiling of Scada Network Traffic Using Machine Learning Algorithms
2014-03-27
Behavioral Profiling of SCADA Network Traffic Using Machine Learning Algorithms. Thesis by Jessica R. Werling, Captain, USAF; report AFIT-ENG-14-M-81.
Improved Adjoint-Operator Learning For A Neural Network
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad; Barhen, Jacob
1995-01-01
An improved method of adjoint-operator learning reduces the amount of computation and associated memory needed for an electronic neural network to learn a temporally varying pattern (e.g., to recognize a moving object in an image) in real time. The method is an extension of the one described in "Adjoint-Operator Learning for a Neural Network" (NPO-18352).
Learning as Issue Framing in Agricultural Innovation Networks
ERIC Educational Resources Information Center
Tisenkopfs, Talis; Kunda, Ilona; Šumane, Sandra
2014-01-01
Purpose: Networks are increasingly viewed as entities of learning and innovation in agriculture. In this article we explore learning as issue framing in two agricultural innovation networks. Design/methodology/approach: We combine frame analysis and social learning theories to analyse the processes and factors contributing to frame convergence and…
Neuromorphic Optical Signal Processing and Image Understanding for Automated Target Recognition
1989-12-01
Report sections cover a stochastic learning machine, neuromorphic target identification, and cognitive networks, followed by conclusions, publications, and references. Appendices: I. Optoelectronic Neural Networks and Learning Machines; II. Stochastic Optical Learning Machine; III. Learning Network for Extrapolation and Radar Target Identification.
Personal Learning Network Clusters: A Comparison between Mathematics and Computer Science Students
ERIC Educational Resources Information Center
Harding, Ansie; Engelbrecht, Johann
2015-01-01
"Personal learning environments" (PLEs) and "personal learning networks" (PLNs) are well-known concepts. A personal learning network "cluster" is a small group of people who regularly interact academically and whose PLNs have a non-empty intersection that includes all the other members. At university level PLN…
SISL and SIRL: Two knowledge dissemination models with leader nodes on cooperative learning networks
NASA Astrophysics Data System (ADS)
Li, Jingjing; Zhang, Yumei; Man, Jiayu; Zhou, Yun; Wu, Xiaojun
2017-02-01
Cooperative learning is one of the most effective and widely used teaching methods. Students' mutual contact in this process forms a cooperative learning network. Our previous research demonstrated that the cooperative learning network has complex characteristics. This study investigates the dynamic spreading of knowledge in the cooperative learning network and the inspirational role of leaders in this process. To this end, complex network transmission dynamics theory is used to construct knowledge dissemination models of a cooperative learning network. Based on existing epidemic models, we propose a new susceptible-infected-susceptible-leader (SISL) model that considers both students' forgetting and leaders' inspiration, and a susceptible-infected-removed-leader (SIRL) model that considers students' interest in spreading and leaders' inspiration. The spreading threshold λc and its impact factors are analyzed. Numerical simulation and analysis then reveal the dynamic transmission mechanism of knowledge and the role of leaders. This work is of significance to cooperative learning theory and teaching practice, and it enriches the theory of complex network transmission dynamics.
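A hedged simulation sketch of SIS-style knowledge spreading with forgetting and leader inspiration is given below; the contact network, the rates, and the way leaders boost transmission are assumptions for illustration rather than the paper's exact SISL formulation.

```python
# Hedged SIS-style sketch: knowledge spreads along edges, non-leaders forget,
# and leaders transmit more effectively. All parameters are assumed values.
import networkx as nx
import numpy as np

rng = np.random.default_rng(5)
G = nx.barabasi_albert_graph(200, 3, seed=5)
leaders = set(sorted(G.nodes, key=G.degree, reverse=True)[:5])

beta, delta, boost = 0.05, 0.02, 3.0       # spread rate, forgetting rate, leader factor
knows = {v: (v in leaders) for v in G}     # leaders seed the knowledge

for step in range(200):
    new_state = dict(knows)
    for v in G:
        if knows[v]:
            if v not in leaders and rng.random() < delta:
                new_state[v] = False       # forgetting (susceptible again)
        else:
            for nb in G.neighbors(v):
                if knows[nb]:
                    rate = beta * (boost if nb in leaders else 1.0)
                    if rng.random() < rate:
                        new_state[v] = True
                        break
    knows = new_state

print("fraction knowing after 200 steps:", sum(knows.values()) / G.number_of_nodes())
```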
Saliency U-Net: A regional saliency map-driven hybrid deep learning network for anomaly segmentation
NASA Astrophysics Data System (ADS)
Karargyros, Alex; Syeda-Mahmood, Tanveer
2018-02-01
Deep learning networks are gaining popularity in many medical image analysis tasks due to their generalized ability to automatically extract relevant features from raw images. However, this can make the learning problem unnecessarily harder, requiring network architectures of high complexity. In the case of anomaly detection, in particular, there is often sufficient regional difference between the anomaly and the surrounding parenchyma that could easily be highlighted through bottom-up saliency operators. In this paper we propose a new hybrid deep learning network that uses a combination of the raw image and such regional maps to learn the anomalies more accurately with simpler network architectures. Specifically, we modify a deep learning network called U-Net, using both the raw and pre-segmented images as input to produce joint encoding (contraction) and expansion (decoding) paths in the U-Net. We present results of successfully delineating subdural and epidural hematomas in brain CT imaging and liver hemangioma in abdominal CT images using such a network.
Miconi, Thomas
2017-01-01
Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible, and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior. DOI: http://dx.doi.org/10.7554/eLife.20899.001 PMID:28230528
How and What Do Academics Learn through Their Personal Networks
ERIC Educational Resources Information Center
Pataraia, Nino; Margaryan, Anoush; Falconer, Isobel; Littlejohn, Allison
2015-01-01
This paper investigates the role of personal networks in academics' learning in relation to teaching. Drawing on in-depth interviews with 11 academics, this study examines, first, how and what academics learn through their personal networks; second, the perceived value of networks in relation to academics' professional development; and, third,…
Statewide Work-Based Learning Intermediary Network: Fiscal Year 2014 Report
ERIC Educational Resources Information Center
Iowa Department of Education, 2014
2014-01-01
The Statewide Work-based Learning Intermediary Network Fiscal Year 2014 Report summarizes fiscal year 2014 (FY14) work-based learning activities of the 15 regional intermediary networks. This report includes activities which occurred between October 1, 2013, to June 30, 2014. It is notable that some intermediary regional networks have been in…
Networking for Teacher Learning: Toward a Theory of Effective Design.
ERIC Educational Resources Information Center
McDonald, Joseph P.; Klein, Emily J.
2003-01-01
Examines how teacher networks design for teacher learning, describing several dynamic tensions inherent in the designs of a sample of teacher networks and assessing the relationships of these tensions to teacher learning. The paper illustrates these design concepts with reference to the work of seven networks that aim to revamp teacher' knowledge…
Network reciprocity by coexisting learning and teaching strategies
NASA Astrophysics Data System (ADS)
Tanimoto, Jun; Brede, Markus; Yamauchi, Atsuo
2012-03-01
We propose a network reciprocity model in which an agent probabilistically adopts learning or teaching strategies. In the learning adaptation mechanism, an agent may copy a neighbor's strategy through Fermi pairwise comparison. The teaching adaptation mechanism involves an agent imposing its strategy on a neighbor. Our simulations reveal that the reciprocity is significantly affected by the frequency with which learning and teaching agents coexist in a network and by the structure of the network itself.
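The learning adaptation mechanism can be illustrated with the Fermi pairwise comparison rule; the noise parameter K below is an assumed value.

```python
# Minimal sketch of the Fermi pairwise comparison rule for the "learning"
# update: an agent copies a neighbour's strategy with a probability that grows
# with the payoff difference. K is an assumed noise parameter.
import math
import random

def fermi_adopt(payoff_self, payoff_neighbour, K=0.1):
    """Return True if the agent copies the neighbour's strategy."""
    p = 1.0 / (1.0 + math.exp((payoff_self - payoff_neighbour) / K))
    return random.random() < p

# Example: a poorly performing agent almost certainly imitates a richer neighbour.
print(fermi_adopt(payoff_self=1.0, payoff_neighbour=3.0))
```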
Peer Apprenticeship Learning in Networked Learning Communities: The Diffusion of Epistemic Learning
ERIC Educational Resources Information Center
Jamaludin, Azilawati; Shaari, Imran
2016-01-01
This article discusses peer apprenticeship learning (PAL) as situated within networked learning communities (NLCs). The context revolves around the diffusion of technologically-mediated learning in Singapore schools, where teachers begin to implement inquiry-oriented learning, consistent with 21st century learning, among students. As these schools…
Deep Logic Networks: Inserting and Extracting Knowledge From Deep Belief Networks.
Tran, Son N; d'Avila Garcez, Artur S
2018-02-01
Developments in deep learning have seen the use of layerwise unsupervised learning combined with supervised learning for fine-tuning. With this layerwise approach, a deep network can be seen as a more modular system that lends itself well to learning representations. In this paper, we investigate whether such modularity can be useful for the insertion of background knowledge into deep networks and whether it can improve learning performance when such knowledge is available, and for the extraction of knowledge from trained deep networks and whether it can offer a better understanding of the representations learned by such networks. To this end, we use a simple symbolic language, a set of logical rules that we call confidence rules, and show that it is suitable for the representation of quantitative reasoning in deep networks. We show by knowledge extraction that confidence rules can offer a low-cost representation for layerwise networks (or restricted Boltzmann machines). We also show that layerwise extraction can produce an improvement in the accuracy of deep belief networks. Furthermore, the proposed symbolic characterization of deep networks provides a novel method for the insertion of prior knowledge and training of deep networks. With the use of this method, a deep neural-symbolic system is proposed and evaluated, with the experimental results indicating that modularity through the use of confidence rules and knowledge insertion can be beneficial to network performance.
Experiments on Learning by Back Propagation.
ERIC Educational Resources Information Center
Plaut, David C.; And Others
This paper describes further research on a learning procedure for layered networks of deterministic, neuron-like units, described by Rumelhart et al. The units, the way they are connected, the learning procedure, and the extension to iterative networks are presented. In one experiment, a network learns a set of filters, enabling it to discriminate…
Just the Facts: Personal Learning Networks
ERIC Educational Resources Information Center
Nussbaum-Beach, Sheryl
2013-01-01
One has heard about personal learning networks (PLNs), but what are they and how are they different than professional learning communities (PLCs)? Find out how PLNs can help a teacher pursue his/her own professional interests and be a better teacher. This article answers questions related to PLNs such as: (1) What are personal learning networks?;…
Development of a biotinylated DNA probe for detection of infectious hematopoietic necrosis virus
Deering, R.E.; Arakawa, C.K.; Oshima, K.H.; O'Hara, P.J.; Landolt, M.L.; Winton, J.R.
1991-01-01
A nonradioactive DNA probe assay was developed to detect and identify infectious hematopoietic necrosis virus (IHNV) using a dot blot format. The probe, a synthetic DNA oligonucleotide labeled enzymatically with biotin, hybridized specifically with nucleocapsid mRNA extracted from infected cells early in the virus replication cycle. A rapid guanidinium thiocyanate based RNA extraction method using RNAzol B and microcentrifuge tubes efficiently produced high quality RNA from 3 commonly used fish cell lines, CHSE-214, CHH-1, and EPC. The probe reacted with 6 diverse isolates of IHNV, but did not react…
Uddin, Raihan; Singh, Shiva M.
2017-01-01
As humans age, many suffer from a decline in normal brain functions, including spatial learning impairments. This study aimed to better understand the molecular mechanisms in age-associated spatial learning impairment (ASLI). We used a mathematical modeling approach implemented in Weighted Gene Co-expression Network Analysis (WGCNA) to create and compare gene network models of young (learning unimpaired) and aged (predominantly learning impaired) brains from a set of exploratory datasets in rats in the context of ASLI. The major goal was to overcome some of the limitations previously observed in the traditional meta- and pathway analysis using these data, and to identify novel ASLI-related genes and their networks based on the co-expression relationships of genes. This analysis identified a set of network modules in the young, each of which is highly enriched with genes functioning in broad but distinct GO functional categories or biological pathways. Interestingly, the analysis pointed to a single module that was highly enriched with genes functioning in “learning and memory” related functions and pathways. Subsequent differential network analysis of this “learning and memory” module in the aged (predominantly learning impaired) rats compared to the young learning unimpaired rats allowed us to identify a set of novel ASLI candidate hub genes. Some of these genes show significant repeatability in networks generated from independent young and aged validation datasets. These hub genes are highly co-expressed with other genes in the network, which not only show differential expression but also differential co-expression and differential connectivity across age and learning impairment. The known functions of these hub genes indicate that they play key roles in critical pathways, including kinase and phosphatase signaling, in functions related to various ion channels, and in maintaining neuronal integrity relating to synaptic plasticity and memory formation. Taken together, they provide new insight into, and generate new hypotheses about, the molecular mechanisms responsible for age-associated learning impairment, including spatial learning. PMID:29066959
Hebbian based learning with winner-take-all for spiking neural networks
NASA Astrophysics Data System (ADS)
Gupta, Ankur; Long, Lyle
2009-03-01
Learning methods for spiking neural networks are not as well developed as the traditional neural networks that widely use back-propagation training. We propose and implement a Hebbian based learning method with winner-take-all competition for spiking neural networks. This approach is spike time dependent which makes it naturally well suited for a network of spiking neurons. Homeostasis with Hebbian learning is implemented which ensures stability and quicker learning. Homeostasis implies that the net sum of incoming weights associated with a neuron remains the same. Winner-take-all is also implemented for competitive learning between output neurons. We implemented this learning rule on a biologically based vision processing system that we are developing, and use layers of leaky integrate and fire neurons. The network when presented with 4 bars (or Gabor filters) of different orientation learns to recognize the bar orientations (or Gabor filters). After training, each output neuron learns to recognize a bar at specific orientation and responds by firing more vigorously to that bar and less vigorously to others. These neurons are found to have bell shaped tuning curves and are similar to the simple cells experimentally observed by Hubel and Wiesel in the striate cortex of cat and monkey.
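A rate-based sketch of Hebbian learning with winner-take-all competition and homeostatic weight normalization follows; it replaces the spiking (leaky integrate-and-fire) dynamics described above with simple rate responses, and the four input patterns stand in for the oriented bars.

```python
# Hedged rate-based sketch of Hebbian learning with winner-take-all competition
# and homeostatic normalization of incoming weights. Patterns stand in for the
# four oriented bars; sizes and learning rate are assumed values.
import numpy as np

rng = np.random.default_rng(6)
n_in, n_out, eta = 16, 4, 0.1

# Four orthogonal "bar" patterns over a 4x4 input grid (one row each).
patterns = [np.zeros(n_in) for _ in range(4)]
for i, p in enumerate(patterns):
    p[i * 4:(i + 1) * 4] = 1.0

W = rng.random((n_out, n_in))
W /= W.sum(axis=1, keepdims=True)            # homeostasis: fixed incoming weight sum

for epoch in range(200):
    x = patterns[rng.integers(4)]
    winner = int(np.argmax(W @ x))           # winner-take-all competition
    W[winner] += eta * x * (W[winner] @ x)   # Hebbian update for the winner only
    W[winner] /= W[winner].sum()             # renormalize to keep the sum constant

# After training, each output unit responds most strongly to one orientation.
print(np.round(W @ np.array(patterns).T, 2))
```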
Bidirectional extreme learning machine for regression problem and its learning effectiveness.
Yang, Yimin; Wang, Yaonan; Yuan, Xiaofang
2012-09-01
The learning effectiveness and learning speed of neural networks are in general far from what applications require, which has been a major bottleneck for many applications. Recently, a simple and efficient learning method, referred to as extreme learning machine (ELM), was proposed by Huang, which has shown that, compared to some conventional methods, the training time of neural networks can be reduced by a thousand times. However, one of the open problems in ELM research is whether the number of hidden nodes can be further reduced without affecting learning effectiveness. This brief proposes a new learning algorithm, called bidirectional extreme learning machine (B-ELM), in which some hidden nodes are not randomly selected. In theory, this algorithm tends to reduce network output error to 0 at an extremely early learning stage. Furthermore, we find a relationship between the network output error and the network output weights in the proposed B-ELM. Simulation results demonstrate that the proposed method can be tens to hundreds of times faster than other incremental ELM algorithms.
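For contrast with the bidirectional variant, the sketch below implements the baseline ELM idea for regression: random, untrained hidden-layer weights and an analytic solution for the output weights. The problem, network size, and sigmoid nonlinearity are assumptions.

```python
# Baseline ELM sketch for regression: hidden-layer weights are drawn at random
# and only the output weights are solved analytically by pseudo-inverse.
import numpy as np

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(500, 1))
y = np.sin(4 * X[:, 0]) + 0.05 * rng.normal(size=500)

L = 50                                          # number of hidden nodes
W_in = rng.normal(size=(1, L))                  # random, never trained
b = rng.normal(size=L)

def hidden(X):
    return 1.0 / (1.0 + np.exp(-(X @ W_in + b)))   # sigmoid hidden layer

H = hidden(X)
beta = np.linalg.pinv(H) @ y                    # output weights in one step

print("training MSE:", round(float(np.mean((H @ beta - y) ** 2)), 4))
```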
VLSI Implementation of Neuromorphic Learning Networks
1993-03-31
Final report (1 August 1990 to 31 March 1993): VLSI Implementation of Neuromorphic Learning Networks. Sponsored by the Defense Advanced Research Projects Agency, DARPA Order No. 7013; monitored by AFOSR under Contract No. F49620-90-C.
ERIC Educational Resources Information Center
Casquero, Oskar; Ovelar, Ramón; Romo, Jesús; Benito, Manuel; Alberdi, Mikel
2016-01-01
The main objective of this paper is to analyse the effect of the affordances of a virtual learning environment and a personal learning environment (PLE) in the configuration of the students' personal networks in a higher education context. The results are discussed in light of the adaptation of the students to the learning network made up by two…
Learning oncogenetic networks by reducing to mixed integer linear programming.
Shahrabi Farahani, Hossein; Lagergren, Jens
2013-01-01
Cancer can be a result of accumulation of different types of genetic mutations such as copy number aberrations. The data from tumors are cross-sectional and do not contain the temporal order of the genetic events. Finding the order in which the genetic events have occurred and the progression pathways is of vital importance in understanding the disease. In order to model cancer progression, we propose Progression Networks, a special case of Bayesian networks that is tailored to model disease progression. Progression networks have similarities with Conjunctive Bayesian Networks (CBNs) [1], a variation of Bayesian networks also proposed for modeling disease progression. We also describe a learning algorithm for learning Bayesian networks in general and progression networks in particular. We reduce the hard problem of learning the Bayesian and progression networks to Mixed Integer Linear Programming (MILP). MILP is a Non-deterministic Polynomial-time complete (NP-complete) problem for which very good heuristics exist. We tested our algorithm on synthetic and real cytogenetic data from renal cell carcinoma. We also compared our learned progression networks with the networks proposed in earlier publications. The software is available on the website https://bitbucket.org/farahani/diprog.
ERIC Educational Resources Information Center
van der Meij, Marjoleine G.; Kupper, Frank; Beers, Pieter J.; Broerse, Jacqueline E. W.
2016-01-01
E-learning and storytelling approaches can support informal vicarious learning within geographically widely distributed multi-stakeholder collaboration networks. This case study evaluates hybrid e-learning and video-storytelling approach "TransLearning" by investigation into how its storytelling e-tool supported informal vicarious…
The 3 R's of Learning Time: Rethink, Reshape, Reclaim
ERIC Educational Resources Information Center
Sackey, Shera Carter
2012-01-01
The Learning School Alliance is a network of schools collaborating about professional practice. The network embodies Learning Forward's purpose to advance effective job-embedded professional learning that leads to student outcomes. A key component of Learning Forward's Standards for Professional Learning is a focus on collaborative learning,…
Neural Modularity Helps Organisms Evolve to Learn New Skills without Forgetting Old Skills
Ellefsen, Kai Olav; Mouret, Jean-Baptiste; Clune, Jeff
2015-01-01
A long-standing goal in artificial intelligence is creating agents that can learn a variety of different skills for different problems. In the artificial intelligence subfield of neural networks, a barrier to that goal is that when agents learn a new skill they typically do so by losing previously acquired skills, a problem called catastrophic forgetting. That occurs because, to learn the new task, neural learning algorithms change connections that encode previously acquired skills. How networks are organized critically affects their learning dynamics. In this paper, we test whether catastrophic forgetting can be reduced by evolving modular neural networks. Modularity intuitively should reduce learning interference between tasks by separating functionality into physically distinct modules in which learning can be selectively turned on or off. Modularity can further improve learning by having a reinforcement learning module separate from sensory processing modules, allowing learning to happen only in response to a positive or negative reward. In this paper, learning takes place via neuromodulation, which allows agents to selectively change the rate of learning for each neural connection based on environmental stimuli (e.g. to alter learning in specific locations based on the task at hand). To produce modularity, we evolve neural networks with a cost for neural connections. We show that this connection cost technique causes modularity, confirming a previous result, and that such sparsely connected, modular networks have higher overall performance because they learn new skills faster while retaining old skills more and because they have a separate reinforcement learning module. Our results suggest (1) that encouraging modularity in neural networks may help us overcome the long-standing barrier of networks that cannot learn new skills without forgetting old ones, and (2) that one benefit of the modularity ubiquitous in the brains of natural animals might be to alleviate the problem of catastrophic forgetting. PMID:25837826
Language Views on Social Networking Sites for Language Learning: The Case of Busuu
ERIC Educational Resources Information Center
Álvarez Valencia, José Aldemar
2016-01-01
Social networking has compelled the area of computer-assisted language learning (CALL) to expand its research palette and account for new virtual ecologies that afford language learning and socialization. This study focuses on Busuu, a social networking site for language learning (SNSLL), and analyzes the views of language that are enacted through…
Valt, Christian; Klein, Christoph; Boehm, Stephan G
2015-08-01
Repetition priming is a prominent example of non-declarative memory, and it increases the accuracy and speed of responses to repeatedly processed stimuli. Major long-held memory theories posit that repetition priming results from facilitation within perceptual and conceptual networks for stimulus recognition and categorization. Stimuli can also be bound to particular responses, and it has recently been suggested that this rapid response learning, not network facilitation, provides a sound theory of priming of object recognition. Here, we addressed the relevance of network facilitation and rapid response learning for priming of person recognition with a view to advancing general theories of priming. In four experiments, participants performed conceptual decisions like occupation or nationality judgments for famous faces. The magnitude of rapid response learning varied across experiments, and rapid response learning co-occurred and interacted with facilitation in perceptual and conceptual networks. These findings indicate that rapid response learning and facilitation in perceptual and conceptual networks are complementary rather than competing theories of priming. Thus, future memory theories need to incorporate both rapid response learning and network facilitation as individual facets of priming. © 2014 The British Psychological Society.
How to Trigger Emergence and Self-Organisation in Learning Networks
NASA Astrophysics Data System (ADS)
Brouns, Francis; Fetter, Sibren; van Rosmalen, Peter
The previous chapters of this section discussed why the social structure of Learning Networks is important and present guidelines on how to maintain and allow the emergence of communities in Learning Networks. Chapter 2 explains how Learning Networks rely on social interaction and active participations of the participants. Chapter 3 then continues by presenting guidelines and policies that should be incorporated into Learning Network Services in order to maintain existing communities by creating conditions that promote social interaction and knowledge sharing. Chapter 4 discusses the necessary conditions required for knowledge sharing to occur and to trigger communities to self-organise and emerge. As pointed out in Chap. 4, ad-hoc transient communities facilitate the emergence of social interaction in Learning Networks, self-organising them into communities, taking into account personal characteristics, community characteristics and general guidelines. As explained in Chap. 4 community members would benefit from a service that brings suitable people together for a specific purpose, because it will allow the participant to focus on the knowledge sharing process by reducing the effort or costs. In the current chapter, we describe an example of a peer support Learning Network Service based on the mechanism of peer tutoring in ad-hoc transient communities.
Li, Xin; Verspoor, Karin; Gray, Kathleen; Barnett, Stephen
2016-01-01
This paper summarises a longitudinal analysis of learning interactions occurring over three years among health professionals in an online social network. The study employs the techniques of Social Network Analysis (SNA) and statistical modeling to identify the changes in patterns of interaction over time and test associated structural network effects. SNA results indicate overall low participation in the network, although some participants became active over time and even led discussions. In particular, the analysis has shown that a change of lead contributor results in a change in learning interaction and network structure. The analysis of structural network effects demonstrates that the interaction dynamics slow down over time, indicating that interactions in the network are more stable. The health professionals may be reluctant to share knowledge and collaborate in groups but were interested in building personal learning networks or simply seeking information.
NASA Technical Reports Server (NTRS)
Buntine, Wray L.
1995-01-01
Intelligent systems require software incorporating probabilistic reasoning, and often learning as well. Networks provide a framework and methodology for creating this kind of software. This paper introduces network models based on chain graphs with deterministic nodes. Chain graphs are defined as a hierarchical combination of Bayesian and Markov networks. To model learning, plates are introduced on chain graphs to represent independent samples. The paper concludes by discussing various operations that can be performed on chain graphs with plates as a simplification process or to generate learning algorithms.
Training strategy for convolutional neural networks in pedestrian gender classification
NASA Astrophysics Data System (ADS)
Ng, Choon-Boon; Tay, Yong-Haur; Goi, Bok-Min
2017-06-01
In this work, we studied a strategy for training a convolutional neural network for pedestrian gender classification with a limited amount of labeled training data. Unsupervised learning by k-means clustering on pedestrian images was used to learn the filters that initialize the first layer of the network. As a form of pre-training, supervised learning for the related task of pedestrian classification was performed. Finally, the network was fine-tuned for gender classification. We found that this strategy improved the network's generalization ability in gender classification, achieving better test results than random weight initialization and proving slightly more beneficial than merely initializing the first-layer filters by unsupervised learning. This shows that unsupervised learning followed by pre-training with pedestrian images is an effective strategy for learning useful features for pedestrian gender classification.
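The first step of this strategy can be sketched as k-means clustering on normalized image patches to obtain a first-layer filter bank; random images stand in for pedestrian crops, and the patch size and number of filters are assumed values.

```python
# Hedged sketch of learning first-layer filters by k-means on image patches,
# which could then initialize a convolutional layer. Random images stand in
# for pedestrian crops; patch and filter counts are assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
images = rng.random((100, 32, 32))              # stand-in for grayscale pedestrian crops
patch, n_filters, n_patches = 5, 32, 5000

patches = np.empty((n_patches, patch * patch))
for i in range(n_patches):
    img = images[rng.integers(len(images))]
    r, c = rng.integers(0, 32 - patch, size=2)
    p = img[r:r + patch, c:c + patch].ravel()
    patches[i] = (p - p.mean()) / (p.std() + 1e-8)   # per-patch normalization

kmeans = KMeans(n_clusters=n_filters, n_init=10, random_state=0).fit(patches)
filters = kmeans.cluster_centers_.reshape(n_filters, patch, patch)
print("learned filter bank shape:", filters.shape)   # would initialize the first conv layer
```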
NASA Astrophysics Data System (ADS)
Felgaer, Pablo; Britos, Paola; García-Martínez, Ramón
A Bayesian network is a directed acyclic graph in which each node represents a variable and each arc a probabilistic dependency; such networks provide a compact representation of knowledge and flexible methods of reasoning. Learning a Bayesian network from data is a process divided into two steps: structural learning and parametric learning. In this paper we define an automatic learning method that optimizes Bayesian networks applied to classification, using a hybrid learning method that combines the advantages of decision-tree induction techniques (TDIDT-C4.5) with those of Bayesian networks. The resulting method is applied to prediction in the health domain.
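As a point of reference for the two-step process mentioned in the abstract, the following Python sketch shows generic structural learning followed by parametric learning using the pgmpy library (a recent release is assumed). It is not the authors' TDIDT-C4.5 hybrid, and the input file name is hypothetical.

    # Generic two-step Bayesian-network learning: structure search, then parameter fitting.
    import pandas as pd
    from pgmpy.estimators import HillClimbSearch, BicScore, MaximumLikelihoodEstimator
    from pgmpy.models import BayesianNetwork

    df = pd.read_csv("patients.csv")                    # hypothetical discrete health dataset

    # Step 1 - structural learning: greedy score-based search over candidate DAGs
    dag = HillClimbSearch(df).estimate(scoring_method=BicScore(df))

    # Step 2 - parametric learning: maximum-likelihood CPTs for the selected structure
    model = BayesianNetwork(dag.edges())
    model.fit(df, estimator=MaximumLikelihoodEstimator)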
Back-propagation learning of infinite-dimensional dynamical systems.
Tokuda, Isao; Tokunaga, Ryuji; Aihara, Kazuyuki
2003-10-01
This paper presents numerical studies of applying back-propagation learning to a delayed recurrent neural network (DRNN). The DRNN is a continuous-time recurrent neural network with time-delayed feedback, and back-propagation learning is used to teach spatio-temporal dynamics to the DRNN. Since the time delays make the dynamics of the DRNN infinite-dimensional, the learning algorithm and the learning capability of the DRNN differ from those of an ordinary recurrent neural network (ORNN) with no time delays. First, two types of learning algorithms are developed for a class of DRNNs. Then, using chaotic signals generated from the Mackey-Glass equation and the Rössler equations, the learning capability of the DRNN is examined. By comparing the learning algorithms, learning capability, and robustness against noise of the DRNN with those of the ORNN and a time-delay neural network, advantages as well as disadvantages of the DRNN are investigated.
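For readers unfamiliar with the training signals mentioned above, the Mackey-Glass system is the delay differential equation dx/dt = beta x(t - tau) / (1 + x(t - tau)^n) - gamma x(t). The short Python sketch below generates such a chaotic signal by Euler integration with a delay buffer; the step size, horizon and the commonly used chaotic parameter values are our choices, not taken from the paper.

    # Generate a chaotic Mackey-Glass signal with simple Euler integration.
    import numpy as np

    def mackey_glass(T=10000, dt=0.1, beta=0.2, gamma=0.1, n=10, tau=17.0, x0=1.2):
        delay = int(round(tau / dt))                    # delay expressed in time steps
        x = np.full(T + delay, x0)                      # constant history as initial condition
        for t in range(delay, T + delay - 1):
            x_tau = x[t - delay]
            x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau ** n) - gamma * x[t])
        return x[delay:]

    signal = mackey_glass()                             # training target for a recurrent network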
QUEST: Eliminating Online Supervised Learning for Efficient Classification Algorithms.
Zwartjes, Ardjan; Havinga, Paul J M; Smit, Gerard J M; Hurink, Johann L
2016-10-01
In this work, we introduce QUEST (QUantile Estimation after Supervised Training), an adaptive classification algorithm for Wireless Sensor Networks (WSNs) that eliminates the necessity for online supervised learning. Online processing is important for many sensor network applications. Transmitting raw sensor data puts high demands on the battery, reducing network lifetime. By transmitting only partial results or classifications based on the sampled data, the amount of traffic on the network can be significantly reduced. Such classifications can be made by learning-based algorithms using sampled data. An important issue, however, is the training phase of these learning-based algorithms. Training a deployed sensor network requires a lot of communication and an impractical amount of human involvement. QUEST is a hybrid algorithm that combines supervised learning in a controlled environment with unsupervised learning at the location of deployment. Using the SITEX02 dataset, we demonstrate that the presented solution works with a performance penalty of less than 10% in 90% of the tests. Under some circumstances, it even outperforms a network of classifiers completely trained with supervised learning. As a result, the need for on-site supervised learning and communication for training is completely eliminated by our solution.
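One way to read the idea behind the algorithm's name is sketched below in Python: the supervised phase (in a controlled environment) fixes which quantile of a feature corresponds to the decision boundary, and each deployed node then re-estimates that quantile from its own unlabeled samples to set a local threshold. This is a loose illustration only; the single-feature setup and the function names are our assumptions and do not reproduce the published algorithm.

    # Hedged sketch of a quantile-after-supervised-training classifier.
    import numpy as np

    def train_quantile(feature_train, labels):
        # supervised phase: the fraction of negative samples tells us at which
        # quantile of the feature distribution the decision boundary sits
        return float(np.mean(labels == 0))

    def deploy_threshold(unlabeled_local_samples, q):
        # unsupervised, in-situ re-estimation of the boundary on the deployed node
        return np.quantile(unlabeled_local_samples, q)

    def classify(sample, threshold):
        return int(sample > threshold)                  # 1 = target class, 0 = background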
Efficient and self-adaptive in-situ learning in multilayer memristor neural networks.
Li, Can; Belkin, Daniel; Li, Yunning; Yan, Peng; Hu, Miao; Ge, Ning; Jiang, Hao; Montgomery, Eric; Lin, Peng; Wang, Zhongrui; Song, Wenhao; Strachan, John Paul; Barnell, Mark; Wu, Qing; Williams, R Stanley; Yang, J Joshua; Xia, Qiangfei
2018-06-19
Memristors with tunable resistance states are emerging building blocks of artificial neural networks. However, in situ learning on a large-scale multiple-layer memristor network has yet to be demonstrated because of challenges in device property engineering and circuit integration. Here we monolithically integrate hafnium oxide-based memristors with a foundry-made transistor array into a multiple-layer neural network. We experimentally demonstrate in situ learning capability and achieve competitive classification accuracy on a standard machine learning dataset, which further confirms that the training algorithm allows the network to adapt to hardware imperfections. Our simulation using the experimental parameters suggests that a larger network would further increase the classification accuracy. The memristor neural network is a promising hardware platform for artificial intelligence with high speed-energy efficiency.
Machine Learning and Quantum Mechanics
NASA Astrophysics Data System (ADS)
Chapline, George
The author has previously pointed out some similarities between self-organizing neural networks and quantum mechanics. These types of neural networks were originally conceived of as a way of emulating the cognitive capabilities of the human brain. Recently, extensions of these networks, collectively referred to as deep learning networks, have strengthened the connection between self-organizing neural networks and human cognitive capabilities. In this note we consider whether hardware quantum devices might be useful for emulating neural networks with human-like cognitive capabilities, or alternatively whether implementations of deep learning neural networks using conventional computers might lead to better algorithms for solving the many-body Schrödinger equation.
Distance Learning in a Multimedia Networks Project: Main Results.
ERIC Educational Resources Information Center
Ruokamo, Heli; Pohjolainen, Seppo
2000-01-01
Discusses a goal-oriented project, focused on open learning environments using computer networks, called Distance Learning in Multimedia Networks that was part of the Finnish Multimedia Program. Describes the combined efforts of Finnish telecommunications companies, content providers, publishing houses, hardware companies, and educational…
A common neural network differentially mediates direct and social fear learning.
Lindström, Björn; Haaker, Jan; Olsson, Andreas
2018-02-15
Across species, fears often spread between individuals through social learning. Yet, little is known about the neural and computational mechanisms underlying social learning. Addressing this question, we compared social and direct (Pavlovian) fear learning, showing that they produced indistinguishable behavioral effects and involved the same cross-modal (self/other) aversive learning network, centered on the amygdala, the anterior insula (AI), and the anterior cingulate cortex (ACC). Crucially, the information flow within this network differed between social and direct fear learning. Dynamic causal modeling combined with reinforcement learning modeling revealed that the amygdala and AI provided input to this network during direct and social learning, respectively. Furthermore, the AI gated learning signals based on surprise (associability), which were conveyed to the ACC, in both learning modalities. Our findings provide insights into the mechanisms underlying social fear learning, with implications for understanding common psychological dysfunctions, such as phobias and other anxiety disorders. Copyright © 2017 Elsevier Inc. All rights reserved.
Knowledgeable Lemurs Become More Central in Social Networks.
Kulahci, Ipek G; Ghazanfar, Asif A; Rubenstein, Daniel I
2018-04-23
Strong relationships exist between social connections and information transmission [1-9], where individuals' network position plays a key role in whether or not they acquire novel information [2, 3, 5, 6]. The relationships between social connections and information acquisition may be bidirectional if learning novel information, in addition to being influenced by it, influences network position. Individuals who acquire information quickly and use it frequently may receive more affiliative behaviors [10, 11] and may thus have a central network position. However, the potential influence of learning on network centrality has not been theoretically or empirically addressed. To bridge this epistemic gap, we investigated whether ring-tailed lemurs' (Lemur catta) centrality in affiliation networks changed after they learned how to solve a novel foraging task. Lemurs who had frequently initiated interactions and approached conspecifics before the learning experiment were more likely to observe and learn the task solution. Comparing social networks before and after the learning experiment revealed that the frequently observed lemurs received more affiliative behaviors than they did before; they became more central after the experiment. This change persisted even after the task was removed and was not caused by the observed lemurs initiating more affiliative behaviors. Consequently, quantifying received and initiated interactions separately provides unique insights into the relationships between learning and centrality. While the factors that influence network position are not fully understood, our results suggest that individual differences in learning and becoming successful can play a major role in social centrality, especially when learning from others is advantageous. Copyright © 2018 Elsevier Ltd. All rights reserved.
2007-06-01
information flow involved in network attacks. This kind of information can be invaluable in learning how to best setup and defend computer networks...administrators, and those interested in learning about securing networks a way to conceptualize this complex system of computing. NTAV3D will provide a three...teaching with visual and other components can make learning more effective” (Baxley et al, 2006). A hyperbox (Alpern and Carter, 1991) is
Benefits of Cooperative Learning in Weblog Networks
ERIC Educational Resources Information Center
Wang, Jenny; Fang, Yuehchiu
2005-01-01
The purpose of this study was to explore the benefits of cooperative learning in weblog networks, focusing particularly on learning outcomes in a college writing curriculum integrated with the computer-mediated learning tool of weblogs. The first section addressed the advantages of using weblogs in a cooperative learning structure on teaching and learning.…
Investigating the Educational Value of Social Learning Networks: A Quantitative Analysis
ERIC Educational Resources Information Center
Dafoulas, Georgios; Shokri, Azam
2016-01-01
Purpose: The emergence of Education 2.0 enabled technology-enhanced learning, necessitating new pedagogical approaches, while e-learning has evolved into an instrumental pedagogy of collaboration through affordances of social media. Social learning networks and ubiquitous learning enabled individual and group learning through social engagement and…
Reward-based training of recurrent neural networks for cognitive and value-based tasks
Song, H Francis; Yang, Guangyu R; Wang, Xiao-Jing
2017-01-01
Trained neural network models, which exhibit features of neural activity recorded from behaving animals, may provide insights into the circuit mechanisms of cognitive functions through systematic analysis of network activity and connectivity. However, in contrast to the graded error signals commonly used to train networks through supervised learning, animals learn from reward feedback on definite actions through reinforcement learning. Reward maximization is particularly relevant when optimal behavior depends on an animal’s internal judgment of confidence or subjective preferences. Here, we implement reward-based training of recurrent neural networks in which a value network guides learning by using the activity of the decision network to predict future reward. We show that such models capture behavioral and electrophysiological findings from well-known experimental paradigms. Our work provides a unified framework for investigating diverse cognitive and value-based computations, and predicts a role for value representation that is essential for learning, but not executing, a task. DOI: http://dx.doi.org/10.7554/eLife.21492.001 PMID:28084991
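The following PyTorch sketch illustrates the general training scheme described: a decision network whose activity feeds a value network, with the reward-prediction error guiding a policy-gradient update. It is an illustrative toy, not the authors' implementation; the observation size, trial length and single terminal action are assumptions.

    # Toy reward-based training of a recurrent decision network with a learned value baseline.
    import torch
    import torch.nn as nn

    obs_dim, hid, n_actions = 10, 64, 2
    decision = nn.GRU(obs_dim, hid, batch_first=True)   # recurrent "decision network"
    policy_head = nn.Linear(hid, n_actions)
    value_net = nn.Linear(hid, 1)                        # "value network" reads decision activity
    opt = torch.optim.Adam([*decision.parameters(), *policy_head.parameters(),
                            *value_net.parameters()], lr=1e-3)

    obs = torch.randn(1, 20, obs_dim)                    # one trial of 20 time steps (fake data)
    h, _ = decision(obs)
    logits = policy_head(h[:, -1])                       # act at the end of the trial
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()
    reward = torch.tensor([1.0])                         # reward delivered by the (hypothetical) task

    baseline = value_net(h[:, -1].detach()).squeeze(-1)  # predicted reward from network activity
    advantage = reward - baseline
    loss = -(dist.log_prob(action) * advantage.detach()).mean() + advantage.pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()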
The application of network teaching in applied optics teaching
NASA Astrophysics Data System (ADS)
Zhao, Huifu; Piao, Mingxu; Li, Lin; Liu, Dongmei
2017-08-01
Network technology has become a creative tool for changing human productivity, and its rapid development has brought profound changes to the way we learn, work and live. Network technology has many advantages, such as rich content, varied forms, convenient retrieval, timely communication and efficient combination of resources. Network information resources have become new educational resources, are applied more and more widely in education, and have now become tools for teaching and learning. Network teaching enriches the teaching content and changes the teaching process from traditional knowledge explanation into a new process based on creating situations, independent learning and cooperation on a network technology platform. The teacher's role has shifted from lecturing in the classroom to guiding students to learn better. The network environment only provides a good platform for teaching; a better teaching effect can be obtained only by constantly improving the teaching content. Changchun University of Science and Technology introduced a BB teaching platform, on which both the overall teaching of the applied optics course and classroom teaching can be improved. Teachers assign homework online, and students learn independently offline or cooperatively in groups, which expands the time and space of teaching. Teachers use hypertext to present related applied optics knowledge, rich cases and learning resources, and set up a network interactive platform, a homework submission system, a message board, etc. The teaching platform stimulates students' interest in learning and strengthens interaction in teaching.
Developing 21st century skills through the use of student personal learning networks
NASA Astrophysics Data System (ADS)
Miller, Robert D.
This research was conducted to study the development of 21st century communication, collaboration, and digital literacy skills of students at the high school level through the use of online social network tools. The importance of this study was based on evidence that high school and college students are not graduating with the requisite communication, collaboration, and digital literacy skills, yet employers see these skills as important to the success of their employees. The challenge addressed through this study was how high schools can integrate social network tools into traditional learning environments to foster the development of these 21st century skills. A qualitative research study was completed through the use of a case study. One high school class in a suburban, high-performing town in Connecticut was selected as the research site, and the sample population of eleven student participants engaged in two sets of interviews and learned through the use of social network tools for one semester of the school year. The primary social network tools used were Facebook, Diigo, Google Sites, Google Docs, and Twitter. The data collected and analyzed partially supported the transfer of the theory of connectivism at the high school level. The students actively engaged in collaborative learning and research. Key results indicated a heightened engagement in learning, the development of collaborative learning and research skills, and a greater understanding of how to use social network tools for effective public communication. The use of social network tools with high school students was a positive experience that led to an increased awareness among the students of the benefits social network tools have as a learning tool. The data supported the continued use of social network tools to develop 21st century communication, collaboration, and digital literacy skills. Future research in this area may explore emerging social network tools, the long-term impact these tools have on the development of lifelong learning skills, and quantitative data linked to student learning.
A Bayesian Active Learning Experimental Design for Inferring Signaling Networks.
Ness, Robert O; Sachs, Karen; Mallick, Parag; Vitek, Olga
2018-06-21
Machine learning methods for learning network structure are applied to quantitative proteomics experiments to reverse-engineer intracellular signal transduction networks. They provide insight into the rewiring of signaling within the context of a disease or a phenotype. To learn the causal patterns of influence between proteins in the network, the methods require experiments that include targeted interventions fixing the activity of specific proteins. However, the interventions are costly and add experimental complexity. We describe an active learning strategy for selecting optimal interventions. Our approach takes as inputs pathway databases and historic data sets, expresses them in the form of prior probability distributions on network structures, and selects interventions that maximize their expected contribution to structure learning. Evaluations on simulated and real data show that the strategy reduces the detection error of validated edges as compared with an unguided choice of interventions and avoids redundant interventions, thereby increasing the effectiveness of the experiment.
ERIC Educational Resources Information Center
Winarno, Sri; Muthu, Kalaiarasi Sonai; Ling, Lew Sook
2018-01-01
This study presents students' feedback and the learning impact of the design and development of multimedia learning in a Direct Problem-Based Learning approach (mDPBL) for Computer Networks at Dian Nuswantoro University, Indonesia. This study examined the usefulness, contents and navigation of the multimedia learning as well as its learning impacts towards…
ERIC Educational Resources Information Center
Ackland, Aileen; Swinney, Ann
2015-01-01
In this paper, we draw on Actor-Network Theories (ANT) to explore how material components functioned to create gateways and barriers to a virtual learning network in the context of a professional development module in higher education. Students were practitioners engaged in family learning in different professional roles and contexts. The data…
ERIC Educational Resources Information Center
Sai-rat, Wipa; Tesaputa, Kowat; Sriampai, Anan
2015-01-01
The objectives of this study were 1) to study the current state of and problems with the Learning Organization of the Primary School Network, 2) to develop a Learning Organization Model for the Primary School Network, and 3) to study the findings of analyses conducted using the developed Learning Organization Model to determine how to develop the…
Edmodo social learning network for elementary school mathematics learning
NASA Astrophysics Data System (ADS)
Ariani, Y.; Helsa, Y.; Ahmad, S.; Prahmana, RCI
2017-12-01
Instructional media can be developed as printed media, visual media, audio media, or multimedia. The development of instructional media can also take advantage of technological progress by utilizing the Edmodo social network. This research aims to develop a digital classroom learning model using the Edmodo social learning network for elementary school mathematics learning that is practical, valid and effective in order to improve the quality of learning activities. The result of this research showed that the prototype mathematics learning device for elementary school students using Edmodo was in the good category. Seventy-two percent of students passed the assessment as a result of learning with Edmodo. Edmodo has become a promising way to engage students in a collaborative learning process.
Neural-Network-Development Program
NASA Technical Reports Server (NTRS)
Phillips, Todd A.
1993-01-01
NETS, software tool for development and evaluation of neural networks, provides simulation of neural-network algorithms plus computing environment for development of such algorithms. Uses back-propagation learning method for all of networks it creates. Enables user to customize patterns of connections between layers of network. Also provides features for saving, during learning process, values of weights, providing more-precise control over learning process. Written in ANSI standard C language. Machine-independent version (MSC-21588) includes only code for command-line-interface version of NETS 3.0.
Learning and coding in biological neural networks
NASA Astrophysics Data System (ADS)
Fiete, Ila Rani
How can large groups of neurons that locally modify their activities learn to collectively perform a desired task? Do studies of learning in small networks tell us anything about learning in the fantastically large collection of neurons that make up a vertebrate brain? What factors do neurons optimize by encoding sensory inputs or motor commands in the way they do? In this thesis I present a collection of four theoretical works: each of the projects was motivated by specific constraints and complexities of biological neural networks, as revealed by experimental studies; together, they aim to partially address some of the central questions of neuroscience posed above. We first study the role of sparse neural activity, as seen in the coding of sequential commands in a premotor area responsible for birdsong. We show that the sparse coding of temporal sequences in the songbird brain can, in a network where the feedforward plastic weights must translate the sparse sequential code into a time-varying muscle code, facilitate learning by minimizing synaptic interference. Next, we propose a biologically plausible synaptic plasticity rule that can perform goal-directed learning in recurrent networks of voltage-based spiking neurons that interact through conductances. Learning is based on the correlation of noisy local activity with a global reward signal; we prove that this rule performs stochastic gradient ascent on the reward. Thus, if the reward signal quantifies network performance on some desired task, the plasticity rule provably drives goal-directed learning in the network. To assess the convergence properties of the learning rule, we compare it with a known example of learning in the brain. Song-learning in finches is a clear example of a learned behavior, with detailed available neurophysiological data. With our learning rule, we train an anatomically accurate model birdsong network that drives a sound source to mimic an actual zebrafinch song. Simulation and theoretical results on the scalability of this rule show that learning with stochastic gradient ascent may be adequately fast to explain learning in the bird. Finally, we address the more general issue of the scalability of stochastic gradient learning on quadratic cost surfaces in linear systems, as a function of system size and task characteristics, by deriving analytical expressions for the learning curves.
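The reward-modulated plasticity idea described above (correlating noisy local activity with a global reward signal) can be illustrated with a simple rate-based numpy sketch. The actual thesis works with conductance-based spiking neurons; the rule below is only a generic node-perturbation analogue with illustrative parameters.

    # Reward-modulated Hebbian update: perturb activity, observe global reward, reinforce.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(5, 10))              # weights from 10 inputs to 5 neurons
    eta, sigma, r_bar = 0.01, 0.05, 0.0

    def reward(y, target):
        return -np.mean((y - target) ** 2)               # stand-in performance measure

    x = rng.normal(size=10)
    target = rng.normal(size=5)
    for _ in range(1000):
        noise = sigma * rng.normal(size=5)               # exploratory perturbation of activity
        y = np.tanh(W @ x) + noise
        r = reward(y, target)
        W += eta * (r - r_bar) * np.outer(noise, x)      # correlate noise with reward surplus
        r_bar += 0.05 * (r - r_bar)                      # running baseline of the reward signal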
Towards a Social Networks Model for Online Learning & Performance
ERIC Educational Resources Information Center
Chung, Kon Shing Kenneth; Paredes, Walter Christian
2015-01-01
In this study, we develop a theoretical model to investigate the association between social network properties, "content richness" (CR) in academic learning discourse, and performance. CR is the extent to which one contributes content that is meaningful, insightful and constructive to aid learning and by social network properties we…
Social Networks, Communication Styles, and Learning Performance in a CSCL Community
ERIC Educational Resources Information Center
Cho, Hichang; Gay, Geri; Davidson, Barry; Ingraffea, Anthony
2007-01-01
The aim of this study is to empirically investigate the relationships between communication styles, social networks, and learning performance in a computer-supported collaborative learning (CSCL) community. Using social network analysis (SNA) and longitudinal survey data, we analyzed how 31 distributed learners developed collaborative learning…
Networked Learning for Agricultural Extension: A Framework for Analysis and Two Cases
ERIC Educational Resources Information Center
Kelly, Nick; Bennett, John McLean; Starasts, Ann
2017-01-01
Purpose: This paper presents economic and pedagogical motivations for adopting information and communications technology (ICT)- mediated learning networks in agricultural education and extension. It proposes a framework for networked learning in agricultural extension and contributes a theoretical and case-based rationale for adopting the…
Learning Networks--Enabling Change through Community Action Research
ERIC Educational Resources Information Center
Bleach, Josephine
2016-01-01
Learning networks are a critical element of ethos of the community action research approach taken by the Early Learning Initiative at the National College of Ireland, a community-based educational initiative in the Dublin Docklands. Key criteria for networking, whether at local, national or international level, are the individual's and…
Neural networks for self-learning control systems
NASA Technical Reports Server (NTRS)
Nguyen, Derrick H.; Widrow, Bernard
1990-01-01
It is shown how a neural network can learn of its own accord to control a nonlinear dynamic system. An emulator, a multilayered neural network, learns to identify the system's dynamic characteristics. The controller, another multilayered neural network, next learns to control the emulator. The self-trained controller is then used to control the actual dynamic system. The learning process continues as the emulator and controller improve and track the physical process. An example is given to illustrate these ideas. The 'truck backer-upper,' a neural network controller that steers a trailer truck while the truck is backing up to a loading dock, is demonstrated. The controller is able to guide the truck to the dock from almost any initial position. The technique explored should be applicable to a wide variety of nonlinear control problems.
Lifelong Learning in German Learning Cities/Regions
ERIC Educational Resources Information Center
Reghenzani-Kearns, Denise; Kearns, Peter
2012-01-01
This paper traces the policies and lessons learned from two consecutive German national programs aimed at developing learning cities/regions. Known as Learning Regions Promotion of Networks, this first program transitioned into the current program, Learning on Place. A case study chosen is from the Tolzer region where a network has self-sustained…
A Self-Organizing Incremental Neural Network based on local distribution learning.
Xing, Youlu; Shi, Xiaofeng; Shen, Furao; Zhou, Ke; Zhao, Jinxi
2016-12-01
In this paper, we propose an unsupervised incremental learning neural network based on local distribution learning, which is called the Local Distribution Self-Organizing Incremental Neural Network (LD-SOINN). The LD-SOINN combines the advantages of incremental learning and matrix learning. It can automatically discover suitable nodes to fit the learning data in an incremental way without a priori knowledge such as the structure of the network. The nodes of the network store rich local information regarding the learning data. The adaptive vigilance parameter guarantees that LD-SOINN is able to add new nodes for new knowledge automatically, and the number of nodes will not grow unlimitedly. While the learning process continues, nodes that are close to each other and have similar principal components are merged to obtain a concise local representation, which we call a relaxation data representation. A denoising process based on density is designed to reduce the influence of noise. Experiments show that the LD-SOINN performs well on both artificial and real-world data. Copyright © 2016 Elsevier Ltd. All rights reserved.
Online Learning of Genetic Network Programming and its Application to Prisoner’s Dilemma Game
NASA Astrophysics Data System (ADS)
Mabu, Shingo; Hirasawa, Kotaro; Hu, Jinglu; Murata, Junichi
A new evolutionary model with a network structure, named Genetic Network Programming (GNP), has been proposed recently. GNP, an expansion of GA and GP, represents solutions as a network structure and evolves it by using “offline learning” (selection, mutation, crossover). GNP can memorize past action sequences in the network flow, so it can deal well with Partially Observable Markov Decision Processes (POMDPs). In this paper, in order to improve the ability of GNP, Q learning (an off-policy TD control algorithm), one of the best-known online methods, is introduced for online learning of GNP. Q learning is suitable for GNP because (1) in reinforcement learning, the rewards an agent will get in the future can be estimated, (2) TD control doesn’t need much memory and can learn quickly, and (3) an off-policy method is suitable for searching for an optimal solution independently of the policy. Finally, in the simulations, online learning of GNP is applied to a player of the “Prisoner’s dilemma game” and its ability for online adaptation is confirmed.
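As a reminder of the online component being added to GNP, the following Python sketch shows a plain tabular Q-learning (TD control) update for an iterated prisoner's dilemma in which the state is the pair of previous moves. The opponent policy and parameter values are placeholders; the paper itself attaches Q-values to GNP's network nodes rather than to a table.

    # Tabular Q-learning for an iterated prisoner's dilemma (state = previous moves).
    import random

    actions = ["C", "D"]
    payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    Q = {}                                               # Q[(state, action)]
    alpha, gamma, eps = 0.1, 0.9, 0.1
    state = ("C", "C")

    def q(s, a):
        return Q.get((s, a), 0.0)

    for _ in range(10000):
        a = random.choice(actions) if random.random() < eps else max(actions, key=lambda x: q(state, x))
        opp = random.choice(actions)                     # stand-in opponent policy
        r = payoff[(a, opp)]
        next_state = (a, opp)
        # TD update: move Q toward reward plus discounted best future value
        Q[(state, a)] = q(state, a) + alpha * (r + gamma * max(q(next_state, b) for b in actions) - q(state, a))
        state = next_state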
Facilitative Components of Collaborative Learning: A Review of Nine Health Research Networks.
Leroy, Lisa; Rittner, Jessica Levin; Johnson, Karin E; Gerteis, Jessie; Miller, Therese
2017-02-01
Collaborative research networks are increasingly used as an effective mechanism for accelerating knowledge transfer into policy and practice. This paper explored the characteristics and collaborative learning approaches of nine health research networks. Semi-structured interviews with representatives from eight diverse US health services research networks conducted between November 2012 and January 2013 and program evaluation data from a ninth. The qualitative analysis assessed each network's purpose, duration, funding sources, governance structure, methods used to foster collaboration, and barriers and facilitators to collaborative learning. The authors reviewed detailed notes from the interviews to distill salient themes. Face-to-face meetings, intentional facilitation and communication, shared vision, trust among members and willingness to work together were key facilitators of collaborative learning. Competing priorities for members, limited funding and lack of long-term support and geographic dispersion were the main barriers to coordination and collaboration across research network members. The findings illustrate the importance of collaborative learning in research networks and the challenges to evaluating the success of research network functionality. Conducting readiness assessments and developing process and outcome evaluation metrics will advance the design and show the impact of collaborative research networks. Copyright © 2017 Longwoods Publishing.
Exploring Practice-Research Networks for Critical Professional Learning
ERIC Educational Resources Information Center
Appleby, Yvon; Hillier, Yvonne
2012-01-01
This paper discusses the contribution that practice-research networks can make to support critical professional development in the Learning and Skills sector in England. By practice-research networks we mean groups or networks which maintain a connection between research and professional practice. These networks stem from the philosophy of…
Graduate Employability: The Perspective of Social Network Learning
ERIC Educational Resources Information Center
Chen, Yong
2017-01-01
This study provides a conceptual framework for understanding how graduates acquire employability through social networks in the Chinese context, using insights from social network theory. The paper builds a conceptual model of the relationship among social networks, social network learning and graduate employability, and uses…
Learning polynomial feedforward neural networks by genetic programming and backpropagation.
Nikolaev, N Y; Iba, H
2003-01-01
This paper presents an approach to learning polynomial feedforward neural networks (PFNNs). The approach suggests, first, finding the polynomial network structure by means of a population-based search technique relying on the genetic programming paradigm, and second, further adjusting the weights of the best discovered network by a specially derived backpropagation algorithm for higher-order networks with polynomial activation functions. These two stages of the PFNN learning process enable us to identify networks with good training as well as generalization performance. Empirical results show that this approach finds PFNNs which considerably outperform some previous constructive polynomial network algorithms on processing benchmark time series.
Single-hidden-layer feed-forward quantum neural network based on Grover learning.
Liu, Cheng-Yi; Chen, Chein; Chang, Ching-Ter; Shih, Lun-Min
2013-09-01
In this paper, a novel single-hidden-layer feed-forward quantum neural network model is proposed based on some concepts and principles of quantum theory. By combining the quantum mechanism with the feed-forward neural network, we defined quantum hidden neurons and connected quantum weights, and used them as the fundamental information processing unit in a single-hidden-layer feed-forward neural network. The quantum neurons allow a wide range of nonlinear functions to serve as the activation functions in the hidden layer of the network, and the Grover searching algorithm finds the optimal parameter setting iteratively, thus making very efficient neural network learning possible. The quantum neurons and weights, along with the Grover-searching-based learning, result in a novel and efficient neural network characterized by a reduced network size, highly efficient training and promising future applications. Simulations are conducted to investigate the performance of the proposed quantum network, and the results show that it can achieve accurate learning. Copyright © 2013 Elsevier Ltd. All rights reserved.
Distributed learning automata-based algorithm for community detection in complex networks
NASA Astrophysics Data System (ADS)
Khomami, Mohammad Mehdi Daliri; Rezvanian, Alireza; Meybodi, Mohammad Reza
2016-03-01
Community structure is an important and universal topological property of many complex networks such as social and information networks. The detection of communities in a network is a significant technique for understanding the structure and function of networks. In this paper, we propose an algorithm based on distributed learning automata for community detection (DLACD) in complex networks. In the proposed algorithm, each vertex of the network is equipped with a learning automaton. Through cooperation among the network of learning automata and updating of each automaton's action probabilities, the algorithm iteratively tries to identify high-density local communities. The performance of the proposed algorithm is investigated through a number of simulations on popular synthetic and real networks. Experimental results, in comparison with popular community detection algorithms such as Walktrap, Danon greedy optimization, fuzzy community detection, multi-resolution community detection and label propagation, demonstrate the superiority of DLACD in terms of modularity, NMI, performance, min-max cut and coverage.
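The basic operation each vertex performs can be illustrated with a minimal learning-automaton update. The numpy sketch below implements a linear reward-inaction scheme under an artificial environment; the action set, reward signal and parameters are illustrative and the community-detection machinery of DLACD is omitted.

    # Linear reward-inaction (L_RI) learning automaton: reinforce rewarded actions.
    import numpy as np

    def lri_step(p, rng, environment, a=0.1):
        action = rng.choice(len(p), p=p)                 # sample an action from the probability vector
        if environment(action):                          # favourable response from the environment
            p = p + a * (np.eye(len(p))[action] - p)     # move probability mass toward that action
        return p / p.sum(), action

    rng = np.random.default_rng(1)
    p = np.full(4, 0.25)                                 # 4 candidate actions (e.g. candidate communities)
    for _ in range(200):
        p, _ = lri_step(p, rng, lambda act: act == 2, a=0.05)
    print(p)                                             # probability mass concentrates on action 2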
Co-Operative Learning and Development Networks.
ERIC Educational Resources Information Center
Hodgson, V.; McConnell, D.
1995-01-01
Discusses the theory, nature, and benefits of cooperative learning. Considers the Cooperative Learning and Development Network (CLDN) trial in the JITOL (Just in Time Open Learning) project and examines the relationship between theories about cooperative learning and the reality of a group of professionals participating in a virtual cooperative…
Parameter diagnostics of phases and phase transition learning by neural networks
NASA Astrophysics Data System (ADS)
Suchsland, Philippe; Wessel, Stefan
2018-05-01
We present an analysis of neural network-based machine learning schemes for phases and phase transitions in theoretical condensed matter research, focusing on neural networks with a single hidden layer. Such shallow neural networks were previously found to be efficient in classifying phases and locating phase transitions of various basic model systems. In order to rationalize the emergence of the classification process and for identifying any underlying physical quantities, it is feasible to examine the weight matrices and the convolutional filter kernels that result from the learning process of such shallow networks. Furthermore, we demonstrate how the learning-by-confusing scheme can be used, in combination with a simple threshold-value classification method, to diagnose the learning parameters of neural networks. In particular, we study the classification process of both fully-connected and convolutional neural networks for the two-dimensional Ising model with extended domain wall configurations included in the low-temperature regime. Moreover, we consider the two-dimensional XY model and contrast the performance of the learning-by-confusing scheme and convolutional neural networks trained on bare spin configurations to the case of preprocessed samples with respect to vortex configurations. We discuss these findings in relation to similar recent investigations and possible further applications.
Prototype-Incorporated Emotional Neural Network.
Oyedotun, Oyebade K; Khashman, Adnan
2017-08-15
Artificial neural networks (ANNs) aim to simulate biological neural activities. Interestingly, many "engineering" prospects in ANNs have relied on motivations from cognition and psychology studies. So far, two important learning theories that have been the subject of active research are the prototype and adaptive learning theories. The learning rules employed for ANNs can be related to adaptive learning theory, where several examples of the different classes in a task are supplied to the network for adjusting internal parameters. Conversely, prototype-learning theory uses prototypes (representative examples); usually, one prototype per class of the different classes contained in the task. These prototypes are supplied for systematic matching with new examples so that class association can be achieved. In this paper, we propose and implement a novel neural network algorithm based on modifying the emotional neural network (EmNN) model to unify the prototype- and adaptive-learning theories. We refer to our new model as "prototype-incorporated EmNN". Furthermore, we apply the proposed model to two real-life challenging tasks, namely static hand-gesture recognition and face recognition, and compare the results to those obtained using the popular back-propagation neural network (BPNN), emotional BPNN (EmNN), deep networks, an exemplar classification model, and k-nearest neighbors.
DCS-Neural-Network Program for Aircraft Control and Testing
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
2006-01-01
A computer program implements a dynamic-cell-structure (DCS) artificial neural network that can perform such tasks as learning selected aerodynamic characteristics of an airplane from wind-tunnel test data and computing real-time stability and control derivatives of the airplane for use in feedback linearized control. A DCS neural network is one of several types of neural networks that can incorporate additional nodes in order to rapidly learn increasingly complex relationships between inputs and outputs. In the DCS neural network implemented by the present program, the insertion of nodes is based on accumulated error. A competitive Hebbian learning rule (a supervised-learning rule in which connection weights are adjusted to minimize differences between actual and desired outputs for training examples) is used. A Kohonen-style learning rule (derived from a relatively simple training algorithm that implements a Delaunay triangulation layout of neurons) is used to adjust node positions during training. Neighborhood topology determines which nodes are used to estimate new values. The network learns, starting with two nodes, and adds new nodes sequentially in locations chosen to maximize reductions in global error. At any given time during learning, the error becomes homogeneously distributed over all nodes.
A Statewide Service Learning Network Ignites Teachers and Students.
ERIC Educational Resources Information Center
Monsour, Florence
Service learning, curriculum-linked community service, has proved remarkably effective in igniting students' desire to learn. In 1997, the Wisconsin Partnership in Service Learning was initiated as a cross-disciplinary, cross-institutional endeavor. Supported by a grant from Learn and Serve America, the partnership created a network throughout…
Scaffolding in Connectivist Mobile Learning Environment
ERIC Educational Resources Information Center
Ozan, Ozlem
2013-01-01
Social networks and mobile technologies are transforming learning ecology. In this changing learning environment, we find a variety of new learner needs. The aim of this study is to investigate how to provide scaffolding to the learners in connectivist mobile learning environment: (1) to learn in a networked environment; (2) to manage their…
Next-Generation Machine Learning for Biological Networks.
Camacho, Diogo M; Collins, Katherine M; Powers, Rani K; Costello, James C; Collins, James J
2018-06-14
Machine learning, a collection of data-analytical techniques aimed at building predictive models from multi-dimensional datasets, is becoming integral to modern biological research. By enabling one to generate models that learn from large datasets and make predictions on likely outcomes, machine learning can be used to study complex cellular systems such as biological networks. Here, we provide a primer on machine learning for life scientists, including an introduction to deep learning. We discuss opportunities and challenges at the intersection of machine learning and network biology, which could impact disease biology, drug discovery, microbiome research, and synthetic biology. Copyright © 2018 Elsevier Inc. All rights reserved.
"Getting Practical" and the National Network of Science Learning Centres
ERIC Educational Resources Information Center
Chapman, Georgina; Langley, Mark; Skilling, Gus; Walker, John
2011-01-01
The national network of Science Learning Centres is a co-ordinating partner in the Getting Practical--Improving Practical Work in Science programme. The principle of training provision for the "Getting Practical" programme is a cascade model. Regional trainers employed by the national network of Science Learning Centres trained the cohort of local…
The Practices of Student Network as Cooperative Learning in Ethiopia
ERIC Educational Resources Information Center
Reda, Weldemariam Nigusse; Hagos, Girmay Tsegay
2015-01-01
Student network is a teaching strategy introduced as cooperative learning to all educational levels above the upper primary schools (grade 5 and above) in Ethiopia. The study was, therefore, aimed at investigating to what extent the student network in Ethiopia is actually practiced in line with the principles of cooperative learning. Consequently,…
Hypermedia-Assisted Instruction and Second Language Learning: A Semantic-Network-Based Approach.
ERIC Educational Resources Information Center
Liu, Min
This literature review examines a hypermedia learning environment from a semantic network basis and the application of such an environment to second language learning. (A semantic network is defined as a conceptual representation of knowledge in human memory). The discussion is organized under the following headings and subheadings: (1) Advantages…
The STIN in the Tale: A Socio-Technical Interaction Perspective on Networked Learning
ERIC Educational Resources Information Center
Walker, Steve; Creanor, Linda
2009-01-01
In this paper, we go beyond what have been described as "mechanistic" accounts of e-learning to explore the complexity of relationships between people and technology as encountered in cases of networked learning. We introduce from the social informatics literature the concept of sociotechnical interaction networks which focus on the…
Enriching Professional Learning Networks: A Framework for Identification, Reflection, and Intention
ERIC Educational Resources Information Center
Krutka, Daniel G.; Carpenter, Jeffrey Paul; Trust, Torrey
2017-01-01
Many educators in the 21st century utilize social media platforms to enrich professional learning networks (PLNs). PLNs are uniquely personalized networks that can support participatory and continuous learning. Social media services can mediate professional engagements with a wide variety of people, spaces and tools that might not otherwise be…
ERIC Educational Resources Information Center
Peng, Yefei
2010-01-01
An ontology mapping neural network (OMNN) is proposed in order to learn and infer correspondences among ontologies. It extends the Identical Elements Neural Network (IENN)'s ability to represent and map complex relationships. The learning dynamics of simultaneous (interlaced) training of similar tasks interact at the shared connections of the…
Implementation of a Framework for Collaborative Social Networks in E-Learning
ERIC Educational Resources Information Center
Maglajlic, Seid
2016-01-01
This paper describes the implementation of a framework for the construction and utilization of social networks in E-Learning. These social networks aim to enhance collaboration between all E-Learning participants (i.e. both trainee-to-trainee and trainee-to-tutor communication are targeted). E-Learning systems that include a so-called "social…
The Role of Action Research in the Development of Learning Networks for Entrepreneurs
ERIC Educational Resources Information Center
Brett, Valerie; Mullally, Martina; O'Gorman, Bill; Fuller-Love, Nerys
2012-01-01
Developing sustainable learning networks for entrepreneurs is the core objective of the Sustainable Learning Networks in Ireland and Wales (SLNIW) project. One research team drawn from the Centre for Enterprise Development and Regional Economy at Waterford Institute of Technology and the School of Management and Business from Aberystwyth…
Enhancing Teaching and Learning Wi-Fi Networking Using Limited Resources to Undergraduates
ERIC Educational Resources Information Center
Sarkar, Nurul I.
2013-01-01
Motivating undergraduate students to learn Wi-Fi (wireless fidelity) networking is often difficult because many students find the subject rather technical and abstract when presented in a traditional lecture format. This paper focuses on the teaching and learning aspects of Wi-Fi networking using limited hardware resources. It…
Categorical Structure among Shared Features in Networks of Early-Learned Nouns
ERIC Educational Resources Information Center
Hills, Thomas T.; Maouene, Mounir; Maouene, Josita; Sheya, Adam; Smith, Linda
2009-01-01
The shared features that characterize the noun categories that young children learn first are a formative basis of the human category system. To investigate the potential categorical information contained in the features of early-learned nouns, we examine the graph-theoretic properties of noun-feature networks. The networks are built from the…
Stable architectures for deep neural networks
NASA Astrophysics Data System (ADS)
Haber, Eldad; Ruthotto, Lars
2018-01-01
Deep neural networks have become invaluable tools for supervised machine learning, e.g. classification of text or images. While often offering superior results over traditional techniques and successfully expressing complicated patterns in data, deep architectures are known to be challenging to design and train such that they generalize well to new data. Critical issues with deep architectures are numerical instabilities in derivative-based learning algorithms commonly called exploding or vanishing gradients. In this paper, we propose new forward propagation techniques inspired by systems of ordinary differential equations (ODE) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks. The backbone of our approach is our interpretation of deep learning as a parameter estimation problem of nonlinear dynamical systems. Given this formulation, we analyze stability and well-posedness of deep learning and use this new understanding to develop new network architectures. We relate the exploding and vanishing gradient phenomenon to the stability of the discrete ODE and present several strategies for stabilizing deep learning for very deep networks. While our new architectures restrict the solution space, several numerical experiments show their competitiveness with state-of-the-art networks.
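The ODE view of forward propagation can be made concrete with a few lines of numpy: a residual layer is read as one forward-Euler step of dY/dt = sigma(K(t)Y + b(t)), so the step size h controls how far each layer moves the state. The sketch below is a schematic illustration with arbitrary weights, not the architectures proposed in the paper.

    # Forward propagation as forward-Euler integration of a nonlinear ODE.
    import numpy as np

    def forward_euler_net(Y0, Ks, bs, h=0.1):
        Y = Y0
        for K, b in zip(Ks, bs):                         # one (K, b) pair per layer / time step
            Y = Y + h * np.tanh(K @ Y + b)               # residual update = one Euler step
        return Y

    rng = np.random.default_rng(0)
    layers = 50
    Ks = [rng.normal(scale=0.1, size=(8, 8)) for _ in range(layers)]
    bs = [np.zeros((8, 1)) for _ in range(layers)]
    features = forward_euler_net(rng.normal(size=(8, 3)), Ks, bs)   # 3 examples, 8 features each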
Toolkits and Libraries for Deep Learning.
Erickson, Bradley J; Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy; Philbrick, Kenneth
2017-08-01
Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.
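For concreteness, a minimal CNN of the kind the review surveys can be written in a few lines with one of the covered toolkits (PyTorch is used here); the layer sizes, input resolution and two-class output are illustrative choices, not taken from the paper.

    # A small convolutional network for image classification (e.g. lesion vs. no lesion).
    import torch
    import torch.nn as nn

    class SmallCNN(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.classifier = nn.Linear(32 * 16 * 16, n_classes)

        def forward(self, x):                            # x: (batch, 1, 64, 64) image patches
            return self.classifier(self.features(x).flatten(1))

    model = SmallCNN()
    logits = model(torch.randn(4, 1, 64, 64))            # class scores for a fake batch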
Deep learning for computational chemistry.
Goh, Garrett B; Hodas, Nathan O; Vishnu, Abhinav
2017-06-15
The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on multilayer neural networks. Within the last few years, we have seen the transformative impact of deep learning in many domains, particularly in speech recognition and computer vision, to the extent that the majority of expert practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview into the theory of deep neural networks and their unique properties that distinguish them from traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight its ubiquity and broad applicability to a wide range of challenges in the field, including quantitative structure-activity relationship, virtual screening, protein structure prediction, quantum chemistry, materials design, and property prediction. In reviewing the performance of deep neural networks, we observed a consistent outperformance against non-neural-network state-of-the-art models across disparate research topics, and deep neural network-based models often exceeded the "glass ceiling" expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a valuable tool for computational chemistry. © 2017 Wiley Periodicals, Inc.
Deep learning for computational chemistry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goh, Garrett B.; Hodas, Nathan O.; Vishnu, Abhinav
The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on “deep” neural networks. Within the last few years, we have seen the transformative impact of deep learning in the computer science domain, notably in speech recognition and computer vision, to the extent that the majority of practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview into the theory of deep neural networks and their unique properties as compared to traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight its ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure modeling, QM calculations, materials synthesis and property prediction. In reviewing the performance of deep neural networks, we observed a consistent outperformance against non-neural-network state-of-the-art models across disparate research topics, and deep neural network-based models often exceeded the “glass ceiling” expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a useful tool and may grow into a pivotal role for various challenges in the computational chemistry field.
Identifying Gatekeepers in Online Learning Networks
ERIC Educational Resources Information Center
Gursakal, Necmi; Bozkurt, Aras
2017-01-01
The rise of the networked society has not only changed our perceptions but also the definitions, roles, processes and dynamics of online learning networks. From offline to online worlds, networks are everywhere and gatekeepers are an important entity in these networks. In this context, the purpose of this paper is to explore gatekeeping and…
NASA Astrophysics Data System (ADS)
Nawir, Mukrimah; Amir, Amiza; Lynn, Ong Bi; Yaakob, Naimah; Badlishah Ahmad, R.
2018-05-01
The rapid growth of technologies may expose systems to various network attacks, owing to the nature of data that are frequently exchanged over the Internet and the large volumes of data that need to be handled. Moreover, network anomaly detection using machine learning faces difficulty because very few labelled network datasets are publicly available, which has caused many researchers to keep using the most common network dataset (KDDCup99), which is no longer well suited for evaluating machine learning (ML) algorithms for classification. Several issues regarding these available labelled network datasets are discussed in this paper. The aim of this paper is to build a network anomaly detection system using machine learning algorithms that is efficient, effective and fast. The findings showed that the AODE algorithm performed well in terms of accuracy and processing time for binary classification on the UNSW-NB15 dataset.
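A rough sketch of the kind of evaluation loop described is shown below in Python with scikit-learn. Note that scikit-learn does not ship an AODE implementation (AODE is typically used via Weka), so GaussianNB stands in purely for illustration, and the preprocessed feature file name is hypothetical.

    # Binary normal/attack classification on a labelled network dataset (illustrative only).
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import accuracy_score

    df = pd.read_csv("unsw_nb15_features.csv")           # hypothetical preprocessed UNSW-NB15 table
    X, y = df.drop(columns=["label"]), df["label"]       # label: 0 = normal, 1 = attack
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
    clf = GaussianNB().fit(X_tr, y_tr)                   # stand-in for an AODE classifier
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))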
Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco
2017-01-01
The recent "deep learning revolution" in artificial neural networks had strong impact and widespread deployment for engineering applications, but the use of deep learning for neurocomputational modeling has been so far limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems.
The research of "blind" spot in the LVQ network
NASA Astrophysics Data System (ADS)
Guo, Zhanjie; Nan, Shupo; Wang, Xiaoli
2017-04-01
Competitive neural networks are now widely used in pattern recognition, classification and other applications, and show clear advantages over traditional clustering methods. They are nevertheless inadequate in several respects and need further improvement. Based on the Learning Vector Quantization network proposed by Kohonen [1], this paper resolves the large training error that arises when there are "blind" spots in a network by introducing threshold-value learning rules, and implements the approach in Matlab.
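For orientation, a minimal sketch of the basic LVQ1 update rule that the paper builds on (a plain illustration; it does not include the paper's threshold-based fix for "blind" spots). The toy data, prototype initialisation and learning rate are assumptions.

```python
# Basic LVQ1: the winning prototype moves toward same-class samples and away
# from other-class samples.
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    P = prototypes.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            w = np.argmin(np.linalg.norm(P - X[i], axis=1))   # winning prototype
            sign = 1.0 if proto_labels[w] == y[i] else -1.0   # attract or repel
            P[w] += sign * lr * (X[i] - P[w])
    return P

# toy two-class data, one prototype per class
np.random.seed(0)
X = np.vstack([np.random.randn(50, 2) + 2, np.random.randn(50, 2) - 2])
y = np.array([0] * 50 + [1] * 50)
protos = lvq1_train(X, y, prototypes=np.array([[1.0, 1.0], [-1.0, -1.0]]),
                    proto_labels=np.array([0, 1]))
print(protos)
```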
Evolution of individual versus social learning on social networks
Tamura, Kohei; Kobayashi, Yutaka; Ihara, Yasuo
2015-01-01
A number of studies have investigated the roles played by individual and social learning in cultural phenomena and the relative advantages of the two learning strategies in variable environments. Because social learning involves the acquisition of behaviours from others, its utility depends on the availability of ‘cultural models’ exhibiting adaptive behaviours. This indicates that social networks play an essential role in the evolution of learning. However, possible effects of social structure on the evolution of learning have not been fully explored. Here, we develop a mathematical model to explore the evolutionary dynamics of learning strategies on social networks. We first derive the condition under which social learners (SLs) are selectively favoured over individual learners in a broad range of social networks. We then obtain an analytical approximation of the long-term average frequency of SLs in homogeneous networks, from which we specify the condition, in terms of three relatedness measures, for social structure to facilitate the long-term evolution of social learning. Finally, we evaluate our approximation by Monte Carlo simulations in complete graphs, regular random graphs and scale-free networks. We formally show that whether social structure favours the evolution of social learning is determined by the relative magnitudes of two effects of social structure: localization in competition, by which competition between learning strategies is evaded, and localization in cultural transmission, which slows down the spread of adaptive traits. In addition, our estimates of the relatedness measures suggest that social structure disfavours the evolution of social learning when selection is weak. PMID:25631568
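A toy Monte Carlo sketch in the spirit of the simulations mentioned above, but a drastic simplification rather than the authors' model: individual learners (IL) acquire the adaptive behaviour directly at a cost, social learners (SL) copy a random neighbour, and strategies spread by imitating fitter neighbours. The payoff values and update rule are illustrative assumptions.

```python
# Toy simulation of individual vs social learners on a network (not the paper's model).
import random
import networkx as nx

def simulate(G, p_sl=0.5, benefit=1.0, cost=0.3, steps=5000, seed=0):
    random.seed(seed)
    strategy = {n: ("SL" if random.random() < p_sl else "IL") for n in G}
    behaviour = {n: False for n in G}                 # True = adaptive behaviour
    payoff = {n: 0.0 for n in G}
    for _ in range(steps):
        n = random.choice(list(G))
        if strategy[n] == "IL":
            behaviour[n] = True                       # direct learning, pays a cost
            payoff[n] = benefit - cost
        else:
            model = random.choice(list(G[n]))         # copy a random neighbour
            behaviour[n] = behaviour[model]
            payoff[n] = benefit if behaviour[n] else 0.0
        rival = random.choice(list(G[n]))             # imitate fitter neighbours
        if payoff[rival] > payoff[n]:
            strategy[n] = strategy[rival]
    return sum(s == "SL" for s in strategy.values()) / G.number_of_nodes()

G = nx.watts_strogatz_graph(200, k=6, p=0.1, seed=1)
print("long-run SL frequency (toy):", simulate(G))
```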
Sadeghi, Zahra
2016-09-01
In this paper, I investigate conceptual categories derived from developmental processing in a deep neural network. The similarity matrices of the deep representation at each layer of the neural network are computed and compared with the raw representation. While the clusters generated by the raw representation stand at the basic level of abstraction, the conceptual categories obtained from the deep representation show a bottom-up transition procedure. The results demonstrate a developmental course of learning from a specific to a general level of abstraction through the learned layers of representation in a deep belief network. © The Author(s) 2016.
Designing a holistic end-to-end intelligent network analysis and security platform
NASA Astrophysics Data System (ADS)
Alzahrani, M.
2018-03-01
A firewall protects a network from outside attacks; however, once an attack has entered the network, it is difficult to detect. Significant incidents have occurred recently: millions of Yahoo email accounts were stolen, and crucial institutional data has been held for ransom. For two years, Yahoo's system administrators were unaware that intruders were inside the network, largely because they lacked intelligent tools to monitor user behaviour on the internal network. This paper discusses the design of an intelligent anomaly/malware detection system with appropriate proactive actions. The aim is to equip system administrators with a proper tool to battle insider attackers. The proposed system adopts machine learning to analyse users' behaviour through the runtime behaviour of each node in the network. The machine learning techniques include deep learning, an evolving machine learning perceptron, a hybrid of neural networks and fuzzy logic, and predictive memory techniques. Agent techniques allow the proposed system to be scaled to larger networks.
Wang, Quan; Rothkopf, Constantin A; Triesch, Jochen
2017-08-01
The ability to learn sequential behaviors is a fundamental property of our brains. Yet a long stream of studies, including recent experiments investigating motor sequence learning in adult human subjects, has produced a number of puzzling and seemingly contradictory results. In particular, when subjects have to learn multiple action sequences, learning is sometimes impaired by proactive and retroactive interference effects. In other situations, however, learning is accelerated as reflected in facilitation and transfer effects. At present it is unclear what the underlying neural mechanisms are that give rise to these diverse findings. Here we show that a recently developed recurrent neural network model readily reproduces this diverse set of findings. The self-organizing recurrent neural network (SORN) model is a network of recurrently connected threshold units that combines a simplified form of spike-timing dependent plasticity (STDP) with homeostatic plasticity mechanisms ensuring network stability, namely intrinsic plasticity (IP) and synaptic normalization (SN). When trained on sequence learning tasks modeled after recent experiments we find that it reproduces the full range of interference, facilitation, and transfer effects. We show how these effects are rooted in the network's changing internal representation of the different sequences across learning and how they depend on an interaction of training schedule and task similarity. Furthermore, since learning in the model is based on fundamental neuronal plasticity mechanisms, the model reveals how these plasticity mechanisms are ultimately responsible for the network's sequence learning abilities. In particular, we find that all three plasticity mechanisms are essential for the network to learn effective internal models of the different training sequences. This ability to form effective internal models is also the basis for the observed interference and facilitation effects. This suggests that STDP, IP, and SN may be the driving forces behind our ability to learn complex action sequences.
Functional connectivity changes in second language vocabulary learning.
Ghazi Saidi, Ladan; Perlbarg, Vincent; Marrelec, Guillaume; Pélégrini-Issac, Mélani; Benali, Habib; Ansaldo, Ana-Inés
2013-01-01
Functional connectivity changes in the language network (Price, 2010), and in a control network involved in second language (L2) processing (Abutalebi & Green, 2007) were examined in a group of Persian (L1) speakers learning French (L2) words. Measures of network integration that characterize the global integrative state of a network (Marrelec, Bellec et al., 2008) were gathered, in the shallow and consolidation phases of L2 vocabulary learning. Functional connectivity remained unchanged across learning phases for L1, whereas total, between- and within-network integration levels decreased as proficiency for L2 increased. The results of this study provide the first functional connectivity evidence regarding the dynamic role of the language processing and cognitive control networks in L2 learning (Abutalebi, Cappa, & Perani, 2005; Altarriba & Heredia, 2008; Leonard et al., 2011; Parker-Jones et al., 2011). Thus, increased proficiency results in a higher degree of automaticity and lower cognitive effort (Segalowitz & Hulstijn, 2005). Copyright © 2012 Elsevier Inc. All rights reserved.
Pragmatically Framed Cross-Situational Noun Learning Using Computational Reinforcement Models
Najnin, Shamima; Banerjee, Bonny
2018-01-01
Cross-situational learning and social pragmatic theories are prominent mechanisms for learning word meanings (i.e., word-object pairs). In this paper, the role of reinforcement is investigated for early word-learning by an artificial agent. When exposed to a group of speakers, the agent comes to understand an initial set of vocabulary items belonging to the language used by the group. Both cross-situational learning and social pragmatic theory are taken into account. As social cues, joint attention and prosodic cues in caregiver's speech are considered. During agent-caregiver interaction, the agent selects a word from the caregiver's utterance and learns the relations between that word and the objects in its visual environment. The “novel words to novel objects” language-specific constraint is assumed for computing rewards. The models are learned by maximizing the expected reward using reinforcement learning algorithms [i.e., table-based algorithms: Q-learning, SARSA, SARSA-λ, and neural network-based algorithms: Q-learning for neural network (Q-NN), neural-fitted Q-network (NFQ), and deep Q-network (DQN)]. Neural network-based reinforcement learning models are chosen over table-based models for better generalization and quicker convergence. Simulations are carried out using mother-infant interaction CHILDES dataset for learning word-object pairings. Reinforcement is modeled in two cross-situational learning cases: (1) with joint attention (Attentional models), and (2) with joint attention and prosodic cues (Attentional-prosodic models). Attentional-prosodic models manifest superior performance to Attentional ones for the task of word-learning. The Attentional-prosodic DQN outperforms existing word-learning models for the same task. PMID:29441027
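A minimal sketch of the table-based variant described above: a Q-learner that associates words with objects from a reward signal. The tiny vocabulary, reward scheme and learning parameters are assumptions for illustration; the paper's agent additionally uses attentional and prosodic cues and neural-network function approximators, none of which are reproduced here.

```python
# Tabular (bandit-style) Q-learning of word-object associations.
import random
from collections import defaultdict

words = ["ball", "cup", "dog"]
objects = ["BALL", "CUP", "DOG"]
true_pairs = dict(zip(words, objects))           # hypothetical ground truth

Q = defaultdict(float)                           # Q[(word, object)] -> association value
alpha, epsilon = 0.3, 0.2
random.seed(0)

for episode in range(2000):
    word = random.choice(words)                                  # caregiver utters a word
    if random.random() < epsilon:
        guess = random.choice(objects)                           # explore
    else:
        guess = max(objects, key=lambda o: Q[(word, o)])         # exploit current estimate
    reward = 1.0 if true_pairs[word] == guess else 0.0           # correct-pairing reward
    Q[(word, guess)] += alpha * (reward - Q[(word, guess)])      # one-step update

learned = {w: max(objects, key=lambda o: Q[(w, o)]) for w in words}
print(learned)   # each word should end up mapped to its object
```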
Convergence analysis of sliding mode trajectories in multi-objective neural networks learning.
Costa, Marcelo Azevedo; Braga, Antonio Padua; de Menezes, Benjamin Rodrigues
2012-09-01
The Pareto-optimality concept is used in this paper in order to represent a constrained set of solutions that are able to trade off the two main objective functions involved in supervised learning of neural networks: data-set error and network complexity. The neural network is described as a dynamic system having error and complexity as its state variables, and learning is presented as a process of controlling a learning trajectory in the resulting state space. In order to control the trajectories, sliding mode dynamics is imposed on the network. It is shown that arbitrary learning trajectories can be achieved by maintaining the sliding mode gains within their convergence intervals. Formal proofs of convergence conditions are therefore presented. The concept of trajectory learning presented in this paper goes beyond the selection of a final state in the Pareto set, since that state can be reached through different trajectories, and states along a trajectory can be assessed individually against an additional objective function. Copyright © 2012 Elsevier Ltd. All rights reserved.
A Learning Framework for Winner-Take-All Networks with Stochastic Synapses.
Mostafa, Hesham; Cauwenberghs, Gert
2018-06-01
Many recent generative models make use of neural networks to transform the probability distribution of a simple low-dimensional noise process into the complex distribution of the data. This raises the question of whether biological networks operate along similar principles to implement a probabilistic model of the environment through transformations of intrinsic noise processes. The intrinsic neural and synaptic noise processes in biological networks, however, are quite different from the noise processes used in current abstract generative networks. This, together with the discrete nature of spikes and local circuit interactions among the neurons, raises several difficulties when using recent generative modeling frameworks to train biologically motivated models. In this letter, we show that a biologically motivated model based on multilayer winner-take-all circuits and stochastic synapses admits an approximate analytical description. This allows us to use the proposed networks in a variational learning setting where stochastic backpropagation is used to optimize a lower bound on the data log likelihood, thereby learning a generative model of the data. We illustrate the generality of the proposed networks and learning technique by using them in a structured output prediction task and a semisupervised learning task. Our results extend the domain of application of modern stochastic network architectures to networks where synaptic transmission failure is the principal noise mechanism.
Facilitative Components of Collaborative Learning: A Review of Nine Health Research Networks
Rittner, Jessica Levin; Johnson, Karin E.; Gerteis, Jessie; Miller, Therese
2017-01-01
Objective: Collaborative research networks are increasingly used as an effective mechanism for accelerating knowledge transfer into policy and practice. This paper explored the characteristics and collaborative learning approaches of nine health research networks. Data sources/study setting: Semi-structured interviews with representatives from eight diverse US health services research networks conducted between November 2012 and January 2013 and program evaluation data from a ninth. Study design: The qualitative analysis assessed each network's purpose, duration, funding sources, governance structure, methods used to foster collaboration, and barriers and facilitators to collaborative learning. Data collection: The authors reviewed detailed notes from the interviews to distill salient themes. Principal findings: Face-to-face meetings, intentional facilitation and communication, shared vision, trust among members and willingness to work together were key facilitators of collaborative learning. Competing priorities for members, limited funding and lack of long-term support and geographic dispersion were the main barriers to coordination and collaboration across research network members. Conclusion: The findings illustrate the importance of collaborative learning in research networks and the challenges to evaluating the success of research network functionality. Conducting readiness assessments and developing process and outcome evaluation metrics will advance the design and show the impact of collaborative research networks. PMID:28277202
ERIC Educational Resources Information Center
Lin, Yu-Tzu; Chen, Ming-Puu; Chang, Chia-Hu; Chang, Pu-Chen
2017-01-01
The benefits of social learning have been recognized by existing research. To explore knowledge distribution in social learning and its effects on learning achievement, we developed a social learning platform and explored students' behaviors of peer interactions by the proposed algorithms based on social network analysis. An empirical study was…
ERIC Educational Resources Information Center
Hsieh, Hsiu-Wei
2012-01-01
The proliferation of information and communication technologies and the prevalence of online social networks have facilitated the opportunities for informal learning of foreign languages. However, little educational research has been conducted on how individuals utilize those social networks to take part in self-initiated language learning without…
Dialogue, Language and Identity: Critical Issues for Networked Management Learning
ERIC Educational Resources Information Center
Ferreday, Debra; Hodgson, Vivien; Jones, Chris
2006-01-01
This paper draws on the work of Mikhail Bakhtin and Norman Fairclough to show how dialogue is central to the construction of identity in networked management learning. The paper is based on a case study of a networked management learning course in higher education and attempts to illustrate how participants negotiate issues of difference,…
The Fire Learning Network: A promising conservation strategy for forestry
Bruce E. Goldstein; William H. Butler; R. Bruce Hull
2010-01-01
Conservation Learning Networks (CLN) are an emerging conservation strategy for addressing complex resource management challenges that face the forestry profession. The US Fire Learning Network (FLN) is a successful example of a CLN that operates on a national scale. Developed in 2001 as a partnership between The Nature Conservancy, the US Forest Service, and land-...
Feature Biases in Early Word Learning: Network Distinctiveness Predicts Age of Acquisition
ERIC Educational Resources Information Center
Engelthaler, Tomas; Hills, Thomas T.
2017-01-01
Do properties of a word's features influence the order of its acquisition in early word learning? Combining the principles of mutual exclusivity and shape bias, the present work takes a network analysis approach to understanding how feature distinctiveness predicts the order of early word learning. Distance networks were built from nouns with edge…
Professional Online Presence and Learning Networks: Educating for Ethical Use of Social Media
ERIC Educational Resources Information Center
Forbes, Dianne
2017-01-01
In a teacher education context, this study considers the use of social media for building a professional online presence and learning network. This article provides an overview of uses of social media in teacher education, presents a case study of key processes in relation to professional online presence and learning networks, and highlights…
Kepinska, Olga; de Rover, Mischa; Caspers, Johanneke; Schiller, Niels O
2017-03-01
In an effort to advance the understanding of brain function and organisation accompanying second language learning, we investigate the neural substrates of novel grammar learning in a group of healthy adults, consisting of participants with high and average language analytical abilities (LAA). By means of an Independent Components Analysis, a data-driven approach to functional connectivity of the brain, the fMRI data collected during a grammar-learning task were decomposed into maps representing separate cognitive processes. These included the default mode, task-positive, working memory, visual, cerebellar and emotional networks. We further tested for differences within the components, representing individual differences between the High and Average LAA learners. We found high analytical abilities to be coupled with stronger contributions to the task-positive network from areas adjacent to bilateral Broca's region, stronger connectivity within the working memory network and within the emotional network. Average LAA participants displayed stronger engagement within the task-positive network from areas adjacent to the right-hemisphere homologue of Broca's region and typical to lower level processing (visual word recognition), and increased connectivity within the default mode network. The significance of each of the identified networks for the grammar learning process is presented next to a discussion on the established markers of inter-individual learners' differences. We conclude that in terms of functional connectivity, the engagement of brain's networks during grammar acquisition is coupled with one's language learning abilities. Copyright © 2016 Elsevier B.V. All rights reserved.
A neural network model for credit risk evaluation.
Khashman, Adnan
2009-08-01
Credit scoring is one of the key analytical techniques in credit risk evaluation, which has been an active research area in financial risk management. This paper presents a credit risk evaluation system that uses a neural network model based on the back propagation learning algorithm. We train and implement the neural network to decide whether to approve or reject a credit application, using seven learning schemes and real-world credit applications from the Australian credit approval datasets. A comparison of the system performance under the different learning schemes is provided; furthermore, we compare the performance of two neural networks, with one and two hidden layers, following the ideal learning scheme. Experimental results suggest that neural networks can be effectively used in automatic processing of credit applications.
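An illustrative sketch in the spirit of the system described above: a small backpropagation network for accept/reject credit decisions, comparing one and two hidden layers. The file name and column layout for the Australian credit approval data are assumptions, and the original work's seven dedicated learning schemes are not reproduced here.

```python
# Small MLP credit-approval classifier (illustration only).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

data = pd.read_csv("australian.dat", sep=r"\s+", header=None)   # hypothetical local copy
X, y = data.iloc[:, :-1], data.iloc[:, -1]                      # assumed: last column = approved or not

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for hidden in [(20,), (20, 10)]:                                # one vs two hidden layers
    net = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=hidden, max_iter=1000,
                                      random_state=0))
    net.fit(X_tr, y_tr)
    print(hidden, "test accuracy:", round(net.score(X_te, y_te), 3))
```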
Workplace Learning in Informal Networks
ERIC Educational Resources Information Center
Milligan, Colin; Littlejohn, Allison; Margaryan, Anoush
2014-01-01
Learning does not stop when an individual leaves formal education, but becomes increasingly informal, and deeply embedded within other activities such as work. This article describes the challenges of informal learning in knowledge intensive industries, highlighting the important role of personal learning networks. The article argues that…
Professional Learning Networks Designed for Teacher Learning
ERIC Educational Resources Information Center
Trust, Torrey
2012-01-01
In the information age, students must learn to navigate and evaluate an expanding network of information. Highly effective teachers model this process of information analysis and knowledge acquisition by continually learning through collaboration, professional development, and studying pedagogical techniques and best practices. Many teachers have…
Barnett, Tony; Hoang, Ha; Cross, Merylin; Bridgman, Heather
2015-01-01
Few studies have examined interprofessional practice (IPP) from a mental health service perspective. This study applied a mixed-method approach to examine the IPP and learning occurring in a youth mental health service in Tasmania, Australia. The aims of the study were to investigate the extent to which staff were networked, how collaboratively they practiced and supported student learning, and to elicit the organisation's strengths and opportunities regarding IPP and learning. Six data sets were collected: pre- and post-test readiness for interprofessional learning surveys, Social Network survey, organisational readiness for IPP and learning checklist, "talking wall" role clarification activity, and observations of participants working through a clinical case study. Participants (n = 19) were well-networked and demonstrated a patient-centred approach. Results confirmed participants' positive attitudes to IPP and learning and identified ways to strengthen the organisation's interprofessional capability. This mixed-method approach could assist others to investigate IPP and learning.
Huang, Shuai; Li, Jing; Ye, Jieping; Fleisher, Adam; Chen, Kewei; Wu, Teresa; Reiman, Eric
2013-06-01
Structure learning of Bayesian Networks (BNs) is an important topic in machine learning. Driven by modern applications in genetics and brain sciences, accurate and efficient learning of large-scale BN structures from high-dimensional data becomes a challenging problem. To tackle this challenge, we propose a Sparse Bayesian Network (SBN) structure learning algorithm that employs a novel formulation involving one L1-norm penalty term to impose sparsity and another penalty term to ensure that the learned BN is a Directed Acyclic Graph--a required property of BNs. Through both theoretical analysis and extensive experiments on 11 moderate and large benchmark networks with various sample sizes, we show that SBN leads to improved learning accuracy, scalability, and efficiency as compared with 10 existing popular BN learning algorithms. We apply SBN to a real-world application of brain connectivity modeling for Alzheimer's disease (AD) and reveal findings that could lead to advancements in AD research.
Xing, Youlu; Shen, Furao; Zhao, Jinxi
2016-03-01
The proposed perception evolution network (PEN) is a biologically inspired neural network model for unsupervised learning and online incremental learning. It is able to automatically learn suitable prototypes from learning data in an incremental way, and it does not require the predefined prototype number or the predefined similarity threshold. Meanwhile, being more advanced than the existing unsupervised neural network model, PEN permits the emergence of a new dimension of perception in the perception field of the network. When a new dimension of perception is introduced, PEN is able to integrate the new dimensional sensory inputs with the learned prototypes, i.e., the prototypes are mapped to a high-dimensional space, which consists of both the original dimension and the new dimension of the sensory inputs. In the experiment, artificial data and real-world data are used to test the proposed PEN, and the results show that PEN can work effectively.
A European Languages Virtual Network Proposal
NASA Astrophysics Data System (ADS)
García-Peñalvo, Francisco José; González-González, Juan Carlos; Murray, Maria
ELVIN (European Languages Virtual Network) is a European Union (EU) Lifelong Learning Programme Project aimed at creating an informal social network to support and facilitate language learning. The ELVIN project aims to research and develop the connection between social networks, professional profiles and language learning in an informal educational context. At the core of the ELVIN project, there will be a web 2.0 social networking platform that connects employees/students for language practice based on their own professional/academic needs and abilities, using all relevant technologies. The ELVIN remit involves the examination of both methodological and technological issues inherent in achieving a social-based learning platform that provides the user with their own customized Personal Learning Environment for EU language acquisition. ELVIN started in November 2009 and this paper presents the project aims and objectives as well as the development and implementation of the web platform.
Bayesian Network Webserver: a comprehensive tool for biological network modeling.
Ziebarth, Jesse D; Bhattacharya, Anindya; Cui, Yan
2013-11-01
The Bayesian Network Webserver (BNW) is a platform for comprehensive network modeling of systems genetics and other biological datasets. It allows users to quickly and seamlessly upload a dataset, learn the structure of the network model that best explains the data and use the model to understand relationships between network variables. Many datasets, including those used to create genetic network models, contain both discrete (e.g. genotype) and continuous (e.g. gene expression traits) variables, and BNW allows for modeling hybrid datasets. Users of BNW can incorporate prior knowledge during structure learning through an easy-to-use structural constraint interface. After structure learning, users are immediately presented with an interactive network model, which can be used to make testable hypotheses about network relationships. BNW, including a downloadable structure learning package, is available at http://compbio.uthsc.edu/BNW. (The BNW interface for adding structural constraints uses HTML5 features that are not supported by current version of Internet Explorer. We suggest using other browsers (e.g. Google Chrome or Mozilla Firefox) when accessing BNW). ycui2@uthsc.edu. Supplementary data are available at Bioinformatics online.
Functional brain networks for learning predictive statistics.
Giorgio, Joseph; Karlaftis, Vasilis M; Wang, Rui; Shen, Yuan; Tino, Peter; Welchman, Andrew; Kourtzi, Zoe
2017-08-18
Making predictions about future events relies on interpreting streams of information that may initially appear incomprehensible. This skill relies on extracting regular patterns in space and time by mere exposure to the environment (i.e., without explicit feedback). Yet, we know little about the functional brain networks that mediate this type of statistical learning. Here, we test whether changes in the processing and connectivity of functional brain networks due to training relate to our ability to learn temporal regularities. By combining behavioral training and functional brain connectivity analysis, we demonstrate that individuals adapt to the environment's statistics as they change over time from simple repetition to probabilistic combinations. Further, we show that individual learning of temporal structures relates to decision strategy. Our fMRI results demonstrate that learning-dependent changes in fMRI activation within and functional connectivity between brain networks relate to individual variability in strategy. In particular, extracting the exact sequence statistics (i.e., matching) relates to changes in brain networks known to be involved in memory and stimulus-response associations, while selecting the most probable outcomes in a given context (i.e., maximizing) relates to changes in frontal and striatal networks. Thus, our findings provide evidence that dissociable brain networks mediate individual ability in learning behaviorally-relevant statistics. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
Competitive STDP Learning of Overlapping Spatial Patterns.
Krunglevicius, Dalius
2015-08-01
Spike-timing-dependent plasticity (STDP) is a set of Hebbian learning rules firmly based on biological evidence. It has been demonstrated that one of the STDP learning rules is suited for learning spatiotemporal patterns. When multiple neurons are organized in a simple competitive spiking neural network, this network is capable of learning multiple distinct patterns. If patterns overlap significantly (i.e., patterns are mutually inclusive), however, competition would not preclude trained neuron's responding to a new pattern and adjusting synaptic weights accordingly. This letter presents a simple neural network that combines vertical inhibition and Euclidean distance-dependent synaptic strength factor. This approach helps to solve the problem of pattern size-dependent parameter optimality and significantly reduces the probability of a neuron's forgetting an already learned pattern. For demonstration purposes, the network was trained for the first ten letters of the Braille alphabet.
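A minimal sketch of a standard pair-based STDP weight update, for readers unfamiliar with the rule; it does not include the letter's vertical inhibition or distance-dependent synaptic strength factor. Time constants and learning rates are assumed values.

```python
# Pair-based STDP: potentiation when the presynaptic spike precedes the
# postsynaptic spike, depression otherwise.
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a spike pair separated by delta_t = t_post - t_pre (ms)."""
    if delta_t >= 0:                                     # pre before post: potentiation
        return a_plus * np.exp(-delta_t / tau_plus)
    return -a_minus * np.exp(delta_t / tau_minus)        # post before pre: depression

w = 0.5
for dt in [5.0, 12.0, -8.0, 3.0, -25.0]:                 # example spike-time differences in ms
    w = np.clip(w + stdp_dw(dt), 0.0, 1.0)               # keep the weight bounded
    print(f"dt = {dt:+.0f} ms -> w = {w:.3f}")
```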
Predicting the survival of diabetes using neural network
NASA Astrophysics Data System (ADS)
Mamuda, Mamman; Sathasivam, Saratha
2017-08-01
Data mining techniques are now used to predict diseases in the health care industry, and the neural network is one of the prevailing methods for doing so. This paper presents a study on predicting the survival of diabetes patients using different supervised learning algorithms for neural networks. Three learning algorithms are considered: (i) the Levenberg-Marquardt algorithm, (ii) the Bayesian regularization algorithm and (iii) the scaled conjugate gradient algorithm. The network is trained on the Pima Indian Diabetes Dataset using MATLAB R2014(a). The performance of each algorithm is further discussed through regression analysis, and the prediction accuracy of the best algorithm is computed to validate its predictions.
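A rough sketch of the prediction task described above. scikit-learn does not provide Levenberg-Marquardt or Bayesian-regularization trainers (those are MATLAB Neural Network Toolbox options), so two available solvers are compared instead; the CSV path and column names are assumptions.

```python
# Cross-validated MLP on the Pima Indian Diabetes data (illustration only).
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

cols = ["pregnancies", "glucose", "bp", "skin", "insulin", "bmi", "pedigree", "age", "outcome"]
df = pd.read_csv("pima-indians-diabetes.csv", names=cols)       # hypothetical local copy
X, y = df[cols[:-1]], df["outcome"]

for solver in ["lbfgs", "adam"]:                                # stand-ins for the MATLAB trainers
    net = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(12,), solver=solver,
                                      max_iter=2000, random_state=0))
    scores = cross_val_score(net, X, y, cv=5)
    print(solver, "mean CV accuracy:", round(scores.mean(), 3))
```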
Sampling from complex networks using distributed learning automata
NASA Astrophysics Data System (ADS)
Rezvanian, Alireza; Rahmati, Mohammad; Meybodi, Mohammad Reza
2014-02-01
Complex networks provide a framework for modeling many real-world phenomena as networks; in general, a complex network is treated as a graph of real-world phenomena such as biological, ecological, technological, information and, in particular, social networks. Many recent studies characterize social networks, reflecting a growing trend of analysing online social networks as dynamic, complex, large-scale graphs. Because real networks are large and offer only limited access, the network model is characterized using an appropriate part of the network obtained by sampling. In this paper, a new sampling algorithm based on distributed learning automata is proposed for sampling from complex networks. In the proposed algorithm, a set of distributed learning automata cooperate with each other in order to take appropriate samples from the given network. To investigate the performance of the proposed algorithm, several simulation experiments are conducted on well-known complex networks, and the results are compared with several sampling methods in terms of different measures. The experimental results demonstrate the superiority of the proposed algorithm over the others.
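For context, a simple random-walk sampler of the kind such methods are typically compared against; the paper's distributed-learning-automata algorithm itself is not reproduced here. The test graph and sample size are illustrative.

```python
# Baseline random-walk sampling from a large graph (comparison method, not the
# proposed algorithm).
import random
import networkx as nx

def random_walk_sample(G, sample_size, seed=0):
    random.seed(seed)
    current = random.choice(list(G))
    visited = {current}
    while len(visited) < sample_size:
        current = random.choice(list(G[current]))   # step to a random neighbour
        visited.add(current)
    return G.subgraph(visited).copy()

G = nx.barabasi_albert_graph(5000, m=3, seed=1)     # scale-free test network
S = random_walk_sample(G, sample_size=500)
print(S.number_of_nodes(), S.number_of_edges())
# A common evaluation: compare the degree distributions of G and S.
```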
Motor imagery learning modulates functional connectivity of multiple brain systems in resting state.
Zhang, Hang; Long, Zhiying; Ge, Ruiyang; Xu, Lele; Jin, Zhen; Yao, Li; Liu, Yijun
2014-01-01
Learning motor skills involves subsequent modulation of resting-state functional connectivity in the sensory-motor system. This idea was mostly derived from the investigations on motor execution learning which mainly recruits the processing of sensory-motor information. Behavioral evidences demonstrated that motor skills in our daily lives could be learned through imagery procedures. However, it remains unclear whether the modulation of resting-state functional connectivity also exists in the sensory-motor system after motor imagery learning. We performed a fMRI investigation on motor imagery learning from resting state. Based on previous studies, we identified eight sensory and cognitive resting-state networks (RSNs) corresponding to the brain systems and further explored the functional connectivity of these RSNs through the assessments, connectivity and network strengths before and after the two-week consecutive learning. Two intriguing results were revealed: (1) The sensory RSNs, specifically sensory-motor and lateral visual networks exhibited greater connectivity strengths in precuneus and fusiform gyrus after learning; (2) Decreased network strength induced by learning was proved in the default mode network, a cognitive RSN. These results indicated that resting-state functional connectivity could be modulated by motor imagery learning in multiple brain systems, and such modulation displayed in the sensory-motor, visual and default brain systems may be associated with the establishment of motor schema and the regulation of introspective thought. These findings further revealed the neural substrates underlying motor skill learning and potentially provided new insights into the therapeutic benefits of motor imagery learning.
Spatial features of synaptic adaptation affecting learning performance.
Berger, Damian L; de Arcangelis, Lucilla; Herrmann, Hans J
2017-09-08
Recent studies have proposed that the diffusion of messenger molecules, such as monoamines, can mediate the plastic adaptation of synapses in supervised learning of neural networks. Based on these findings we developed a model for neural learning, where the signal for plastic adaptation is assumed to propagate through the extracellular space. We investigate the conditions allowing learning of Boolean rules in a neural network. Even fully excitatory networks show very good learning performances. Moreover, the investigation of the plastic adaptation features optimizing the performance suggests that learning is very sensitive to the extent of the plastic adaptation and the spatial range of synaptic connections.
Fast detection of the fuzzy communities based on leader-driven algorithm
NASA Astrophysics Data System (ADS)
Fang, Changjian; Mu, Dejun; Deng, Zhenghong; Hu, Jun; Yi, Chen-He
2018-03-01
In this paper, we present the leader-driven algorithm (LDA) for learning community structure in networks. The algorithm allows one to find overlapping clusters in a network, an important aspect of real networks, especially social networks. The algorithm requires no input parameters and learns the number of clusters naturally from the network. It accomplishes this using leadership centrality in a clever manner. It identifies local minima of leadership centrality as followers which belong only to one cluster, and the remaining nodes are leaders which connect clusters. In this way, the number of clusters can be learned using only the network structure. The LDA is also an extremely fast algorithm, having runtime linear in the network size. Thus, this algorithm can be used to efficiently cluster extremely large networks.
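A sketch of the core idea only (not the full LDA): nodes that are local minima of a centrality score are treated as followers belonging to a single cluster, and the remaining nodes are candidate leaders. Degree centrality is used here as a stand-in for the paper's leadership centrality.

```python
# Identify followers as local minima of a centrality measure (illustration only).
import networkx as nx

G = nx.karate_club_graph()
centrality = nx.degree_centrality(G)

followers = [n for n in G
             if all(centrality[n] <= centrality[m] for m in G[n])]   # local minima
leaders = [n for n in G if n not in followers]

print("followers:", sorted(followers))
print("candidate leaders:", sorted(leaders))
```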
Environmental Design for a Structured Network Learning Society
ERIC Educational Resources Information Center
Chang, Ben; Cheng, Nien-Heng; Deng, Yi-Chan; Chan, Tak-Wai
2007-01-01
Social interactions profoundly impact the learning processes of learners in traditional societies. The rapid rise of the Internet using population has been the establishment of numerous different styles of network communities. Network societies form when more Internet communities are established, but the basic form of a network society, especially…
Using Social Networks to Create Powerful Learning Communities
ERIC Educational Resources Information Center
Lenox, Marianne; Coleman, Maurice
2010-01-01
Regular readers of "Computers in Libraries" are aware that social networks are forming increasingly important linkages to professional and personal development in all libraries. Live and virtual social networks have become the new learning playground for librarians and library staff. Social networks have the ability to connect those who are…
CLASH: MASS DISTRIBUTION IN AND AROUND MACS J1206.2-0847 FROM A FULL CLUSTER LENSING ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umetsu, Keiichi; Koch, Patrick M.; Lin, Kai-Yang
2012-08-10
We derive an accurate mass distribution of the galaxy cluster MACS J1206.2-0847 (z = 0.439) from a combined weak-lensing distortion, magnification, and strong-lensing analysis of wide-field Subaru BV R_c I_c z' imaging and our recent 16-band Hubble Space Telescope observations taken as part of the Cluster Lensing And Supernova survey with Hubble program. We find good agreement in the regions of overlap between several weak- and strong-lensing mass reconstructions using a wide variety of modeling methods, ensuring consistency. The Subaru data reveal the presence of a surrounding large-scale structure with the major axis running approximately northwest-southeast (NW-SE), aligned with the cluster and its brightest galaxy shapes, showing elongation with a ~2:1 axis ratio in the plane of the sky. Our full-lensing mass profile exhibits a shallow profile slope dln Σ/dln R ≈ -1 at cluster outskirts (R ≳ 1 Mpc h^-1), whereas the mass distribution excluding the NW-SE excess regions steepens farther out, well described by the Navarro-Frenk-White form. Assuming a spherical halo, we obtain a virial mass M_vir = (1.1 ± 0.2 ± 0.1) × 10^15 M_sun h^-1 and a halo concentration c_vir = 6.9 ± 1.0 ± 1.2 (c_vir ≈ 5.7 when the central 50 kpc h^-1 is excluded), which falls in the range 4 ≲ <c> ≲ 7 of average c(M, z) predictions for relaxed clusters from recent Λ cold dark matter simulations. Our full-lensing results are found to be in agreement with X-ray mass measurements where the data overlap, and when combined with Chandra gas mass measurements, they yield a cumulative gas mass fraction of 13.7 (+4.5/-3.0)% at 0.7 Mpc h^-1 (≈ 1.7 r_2500), a typical value observed for high-mass clusters.
Aircraft-Measured Indirect Cloud Effects from Biomass Burning Smoke in the Arctic and Subarctic
NASA Technical Reports Server (NTRS)
Zamora, L. M.; Kahn, R. A.; Cubison, M. J.; Diskin, G. S.; Jimenez, J. L.; Kondo, Y.; McFarquhar, G. M.; Nenes, A.; Thornhill, K. L.; Wisthaler, A.;
2016-01-01
The incidence of wildfires in the Arctic and subarctic is increasing; in boreal North America, for example, the burned area is expected to increase by 200-300% over the next 50-100 years, which previous studies suggest could have a large effect on cloud microphysics, lifetime, albedo, and precipitation. However, the interactions between smoke particles and clouds remain poorly quantified due to confounding meteorological influences and remote sensing limitations. Here, we use data from several aircraft campaigns in the Arctic and subarctic to explore cloud microphysics in liquid-phase clouds influenced by biomass burning. Median cloud droplet radii in smoky clouds were approx. 40-60% smaller than in background clouds. Based on the relationship between cloud droplet number (N(liq)) and various biomass burning tracers (BBt) across the multi-campaign data set, we calculated the magnitude of subarctic and Arctic smoke aerosol-cloud interactions (ACIs, where ACI = (1/3) x dln(N(liq))/dln(BBt)) to be approx. 0.16 out of a maximum possible value of 0.33 that would be obtained if all aerosols were to nucleate cloud droplets. Interestingly, in a separate subarctic case study with low liquid water content (0.02 g/cu m) and very high aerosol concentrations (2000-3000/cu cm) in the most polluted clouds, the estimated ACI value was only 0.05. In this case, competition for water vapor by the high concentration of cloud condensation nuclei (CCN) strongly limited the formation of droplets and reduced the cloud albedo effect, which highlights the importance of cloud feedbacks across scales. Using our calculated ACI values, we estimate that the smoke-driven cloud albedo effect may decrease local summertime short-wave radiative flux by between 2 and 4 W/sq m or more under some low and homogeneous cloud cover conditions in the subarctic, although the changes should be smaller in high surface albedo regions of the Arctic. We lastly explore evidence suggesting that numerous northern-latitude background Aitken particles can interact with combustion particles, perhaps impacting their properties as cloud condensation and ice nuclei.
Aircraft-Measured Indirect Cloud Effects from Biomass Burning Smoke in the Arctic and Subarctic
NASA Technical Reports Server (NTRS)
Zamora, Lauren; Kahn, R. A.; Cubison, M. C.; Diskin, G. S.; Jimenez, J. L.; Kondo, Y.; McFarquhar, G. M.; Nenes, A.; Wisthaler, A.; Zelenyuk, A.;
2016-01-01
The incidence of wildfires in the Arctic and subarctic is increasing; in boreal North America, for example, the burned area is expected to increase by 200-300% over the next 50-100 years, which previous studies suggest could have a large effect on cloud microphysics, lifetime, albedo, and precipitation. However, the interactions between smoke particles and clouds remain poorly quantified due to confounding meteorological influences and remote sensing limitations. Here, we use data from several aircraft campaigns in the Arctic and subarctic to explore cloud microphysics in liquid-phase clouds influenced by biomass burning. Median cloud droplet radii in smoky clouds were ~50% smaller than in background clouds. Based on the relationship between cloud droplet number (N(liq)) and various biomass burning tracers (BBt) across the multi-campaign dataset, we calculated the magnitude of subarctic and Arctic smoke aerosol-cloud interactions (ACI, where ACI = (1/3) x dln(N(liq))/dln(BBt)) to be 0.12 out of a maximum possible value of 0.33 that would be obtained if all aerosols were to nucleate cloud droplets. Interestingly, in a separate subarctic case study with low liquid water content (0.02 g/cu m) and very high aerosol concentrations (2000-3000/cu cm) in the most polluted clouds, the estimated ACI value was only 0.06. In this case, competition for water vapor by the high concentration of CCN strongly limited the formation of droplets and reduced the cloud albedo effect, which highlights the importance of cloud feedbacks across scales. Using our calculated ACI values, we estimate that the smoke-driven cloud albedo effect may decrease shortwave radiative flux by between 2 and 4 W/sq m or more under some low and homogeneous cloud cover conditions in the subarctic, although the changes should be smaller in high surface albedo regions of the Arctic. We lastly show evidence to suggest that numerous northern latitude background Aitken particles can interact with combustion particles, perhaps impacting their properties as cloud condensation and ice nuclei. However, the influence of background particles on smoke-driven indirect effects is currently unclear.
Goals, Motivation for, and Outcomes of Personal Learning through Networks: Results of a Tweetstorm
ERIC Educational Resources Information Center
Sie, Rory L. L.; Pataraia, Nino; Boursinou, Eleni; Rajagopal, Kamakshi; Margaryan, Anoush; Falconer, Isobel; Bitter-Rijpkema, Marlies; Littlejohn, Allison; Sloep, Peter B.
2013-01-01
Recent developments in the use of social media for learning have posed serious challenges for learners. The information overload that these online social tools create has changed the way learners learn and from whom they learn. An investigation of learners' goals, motivations and expected outcomes when using a personal learning network is…
ERIC Educational Resources Information Center
Chang, Jui-Hung; Chiu, Po-Sheng; Huang, Yueh-Min
2018-01-01
With the advances in mobile network technology, the use of portable devices and mobile networks for learning is not limited by time and space. Such use, in combination with appropriate learning strategies, can achieve a better effect. Despite the effectiveness of mobile learning, students' learning direction, progress, and achievement may differ.…
Learning in Structured Connectionist Networks
1988-04-01
...the structure is too rigid and learning too difficult for cognitive modeling. Two algorithms for learning simple, feature-based concept descriptions were also implemented. ... Recent progress in connectionist research has been encouraging; networks have successfully modeled human performance for various cognitive...
SME Innovation and Learning: The Role of Networks and Crisis Events
ERIC Educational Resources Information Center
Saunders, Mark N. K.; Gray, David E; Goregaokar, Harshita
2014-01-01
Purpose: The purpose of this paper is to contribute to the literature on innovation and entrepreneurial learning by exploring how SMEs learn and innovate, how they use both formal and informal learning and in particular the role of networks and crisis events within their learning experience. Design/methodology/approach: Mixed method study,…
The Mobile Learning Network: Getting Serious about Games Technologies for Learning
ERIC Educational Resources Information Center
Petley, Rebecca; Parker, Guy; Attewell, Jill
2011-01-01
The Mobile Learning Network currently in its third year, is a unique collaborative initiative encouraging and enabling the introduction of mobile learning in English post-14 education. The programme, funded jointly by the Learning and Skills Council and participating colleges and schools and supported by LSN has involved nearly 40,000 learners and…
Identifying Students' Difficulties When Learning Technical Skills via a Wireless Sensor Network
ERIC Educational Resources Information Center
Wang, Jingying; Wen, Ming-Lee; Jou, Min
2016-01-01
Practical training and actual application of acquired knowledge and techniques are crucial for the learning of technical skills. We established a wireless sensor network system (WSNS) based on the 5E learning cycle in a practical learning environment to improve students' reflective abilities and to reduce difficulties for the learning of technical…
ERIC Educational Resources Information Center
Lai, Horng-Ji
2011-01-01
This study examined the effect of civil servants' Self-Directed Learning Readiness (SDLR) and network literacy on their online learning effectiveness in a web-based training program. Participants were 283 civil servants enrolled in an asynchronous online learning program through an e-learning portal provided by the Regional Civil Service…
ERIC Educational Resources Information Center
Ergün, Esin; Usluel, Yasemin Koçak
2016-01-01
In this study, we assessed the communication structure in an educational online learning environment using social network analysis (SNA). The communication structure was examined with respect to time, and instructor's participation. The course was implemented using ELGG, a network learning environment, blended with face-to-face sessions over a…
Language, Learning, and Identity in Social Networking Sites for Language Learning: The Case of Busuu
ERIC Educational Resources Information Center
Alvarez Valencia, Jose Aldemar
2014-01-01
Recent progress in the discipline of computer applications such as the advent of web-based communication, afforded by the Web 2.0, has paved the way for novel applications in language learning, namely, social networking. Social networking has challenged the area of Computer Mediated Communication (CMC) to expand its research palette in order to…
NASA Astrophysics Data System (ADS)
Li, Xiaofeng; Xiang, Suying; Zhu, Pengfei; Wu, Min
2015-12-01
To avoid the inherent deficiencies of the traditional BP neural network, such as slow convergence, susceptibility to local minima, poor generalization ability and difficulty in determining the network structure, a dynamic self-adaptive learning algorithm for the BP neural network is put forward. The new algorithm combines the merits of principal component analysis, particle swarm optimization, correlation analysis and a self-adaptive model, and can therefore effectively solve the problems of selecting structural parameters, initial connection weights, thresholds and learning rates for the BP neural network. It not only reduces human intervention, optimizes the topological structure of BP neural networks and improves generalization ability, but also accelerates convergence, avoids trapping in local minima, and enhances adaptation and prediction ability. The dynamic self-adaptive learning algorithm is used to forecast the total retail sales of consumer goods of Sichuan Province, China. Empirical results indicate that the new algorithm is superior to the traditional BP algorithm in prediction accuracy and time consumption, demonstrating its feasibility and effectiveness.
Learning and innovative elements of strategy adoption rules expand cooperative network topologies.
Wang, Shijun; Szalay, Máté S; Zhang, Changshui; Csermely, Peter
2008-04-09
Cooperation plays a key role in the evolution of complex systems. However, the level of cooperation extensively varies with the topology of agent networks in the widely used models of repeated games. Here we show that cooperation remains rather stable by applying the reinforcement learning strategy adoption rule, Q-learning, on a variety of random, regular, small-world, scale-free and modular network models in repeated, multi-agent Prisoner's Dilemma and Hawk-Dove games. Furthermore, we found that using the above model systems other long-term learning strategy adoption rules also promote cooperation, while introducing a low level of noise (as a model of innovation) to the strategy adoption rules makes the level of cooperation less dependent on the actual network topology. Our results demonstrate that long-term learning and random elements in the strategy adoption rules, when acting together, extend the range of network topologies enabling the development of cooperation at a wider range of costs and temptations. These results suggest that a balanced duo of learning and innovation may help to preserve cooperation during the re-organization of real-world networks, and may play a prominent role in the evolution of self-organizing, complex systems.
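A toy sketch of Q-learning strategy adoption in a networked Prisoner's Dilemma (illustrative payoffs and parameters; not the study's full protocol, which also covers Hawk-Dove games and several network families).

```python
# Q-learning agents playing the Prisoner's Dilemma on a small-world network.
import random
import networkx as nx

random.seed(0)
T, R, P, S = 1.3, 1.0, 0.1, 0.0              # temptation, reward, punishment, sucker's payoff
payoff = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}
alpha, gamma, epsilon = 0.1, 0.9, 0.05

G = nx.watts_strogatz_graph(100, k=4, p=0.1, seed=1)
Q = {n: {"C": 0.0, "D": 0.0} for n in G}     # per-agent action values
action = {n: random.choice("CD") for n in G}

for step in range(20000):
    n = random.choice(list(G))
    a = random.choice("CD") if random.random() < epsilon else max(Q[n], key=Q[n].get)
    m = random.choice(list(G[n]))            # play one randomly chosen neighbour
    r = payoff[(a, action[m])]
    Q[n][a] += alpha * (r + gamma * max(Q[n].values()) - Q[n][a])
    action[n] = a

coop = sum(a == "C" for a in action.values()) / len(action)
print("final cooperation level (toy run):", round(coop, 2))
```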
Model-free distributed learning
NASA Technical Reports Server (NTRS)
Dembo, Amir; Kailath, Thomas
1990-01-01
Model-free learning for synchronous and asynchronous quasi-static networks is presented. The network weights are continuously perturbed, while the time-varying performance index is measured and correlated with the perturbation signals; the correlation output determines the changes in the weights. The perturbation may be either via noise sources or orthogonal signals. The invariance to detailed network structure mitigates large variability between supposedly identical networks as well as implementation defects. This local, regular, and completely distributed mechanism requires no central control and involves only a few global signals. Thus it allows for integrated on-chip learning in large analog and optical networks.
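The core idea, correlating measured changes in a performance index with injected weight perturbations, can be sketched in a few lines. The example below uses a toy quadratic index that is computed in software purely for illustration; in the analog or optical setting described above the index would be measured, not evaluated from a model, and all parameter values here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0])       # hypothetical target mapping
w = np.zeros(4)                               # the adjustable "network weights"

def performance(w):
    # the measured performance index J(w); its internal form is never used by the update
    return np.mean((X @ w - y) ** 2)

sigma, lr = 0.02, 0.05
for step in range(5000):
    delta = rng.normal(scale=sigma, size=4)          # perturbation from the noise sources
    dJ = performance(w + delta) - performance(w)     # measured change in the index
    w -= lr * dJ * delta / sigma**2                  # correlate the change with the perturbation

print("learned weights (approximately [1, -2, 0.5, 0]):", np.round(w, 2))
```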
Control Theoretic Modeling for Uncertain Cultural Attitudes and Unknown Adversarial Intent
2009-02-01
Report documentation excerpt (recoverable fragments only). Subject terms: social learning, social networks, multiagent systems, game theory. The abstract fragments mention constructive computational tools; …over-reactionary behaviors; 3) analysis of rational social learning in networks: analysis of belief propagation in social networks in various…; and a general methodology as a predictive device for social network formation and for communication network formation with constraints on the lengths of…
Neural networks supporting switching, hypothesis testing, and rule application
Liu, Zhiya; Braunlich, Kurt; Wehe, Hillary S.; Seger, Carol A.
2015-01-01
We identified dynamic changes in recruitment of neural connectivity networks across three phases of a flexible rule learning and set-shifting task similar to the Wisconsin Card Sort Task: switching, rule learning via hypothesis testing, and rule application. During fMRI scanning, subjects viewed pairs of stimuli that differed across four dimensions (letter, color, size, screen location), chose one stimulus, and received feedback. Subjects were informed that the correct choice was determined by a simple unidimensional rule, for example “choose the blue letter.” Once each rule had been learned and correctly applied for 4-7 trials, subjects were cued via either negative feedback or visual cues to switch to learning a new rule. Task performance was divided into three phases: Switching (first trial after receiving the switch cue), hypothesis testing (subsequent trials through the last error trial), and rule application (correct responding after the rule was learned). We used both univariate analysis to characterize activity occurring within specific regions of the brain, and a multivariate method, constrained principal component analysis for fMRI (fMRI-CPCA), to investigate how distributed regions coordinate to subserve different processes. As hypothesized, switching was subserved by a limbic network including the ventral striatum, thalamus, and parahippocampal gyrus, in conjunction with cortical salience network regions including the anterior cingulate and frontoinsular cortex. Activity in the ventral striatum was associated with switching regardless of how switching was cued; visually cued shifts were associated with additional visual cortical activity. After switching, as subjects moved into the hypothesis testing phase, a broad fronto-parietal-striatal network (associated with the cognitive control, dorsal attention, and salience networks) increased in activity. This network was sensitive to rule learning speed, with greater extended activity for the slowest learning speed late in the time course of learning. As subjects shifted from hypothesis testing to rule application, activity in this network decreased and activity in the somatomotor and default mode networks increased. PMID:26197092
Neural networks supporting switching, hypothesis testing, and rule application.
Liu, Zhiya; Braunlich, Kurt; Wehe, Hillary S; Seger, Carol A
2015-10-01
We identified dynamic changes in recruitment of neural connectivity networks across three phases of a flexible rule learning and set-shifting task similar to the Wisconsin Card Sort Task: switching, rule learning via hypothesis testing, and rule application. During fMRI scanning, subjects viewed pairs of stimuli that differed across four dimensions (letter, color, size, screen location), chose one stimulus, and received feedback. Subjects were informed that the correct choice was determined by a simple unidimensional rule, for example "choose the blue letter". Once each rule had been learned and correctly applied for 4-7 trials, subjects were cued via either negative feedback or visual cues to switch to learning a new rule. Task performance was divided into three phases: Switching (first trial after receiving the switch cue), hypothesis testing (subsequent trials through the last error trial), and rule application (correct responding after the rule was learned). We used both univariate analysis to characterize activity occurring within specific regions of the brain, and a multivariate method, constrained principal component analysis for fMRI (fMRI-CPCA), to investigate how distributed regions coordinate to subserve different processes. As hypothesized, switching was subserved by a limbic network including the ventral striatum, thalamus, and parahippocampal gyrus, in conjunction with cortical salience network regions including the anterior cingulate and frontoinsular cortex. Activity in the ventral striatum was associated with switching regardless of how switching was cued; visually cued shifts were associated with additional visual cortical activity. After switching, as subjects moved into the hypothesis testing phase, a broad fronto-parietal-striatal network (associated with the cognitive control, dorsal attention, and salience networks) increased in activity. This network was sensitive to rule learning speed, with greater extended activity for the slowest learning speed late in the time course of learning. As subjects shifted from hypothesis testing to rule application, activity in this network decreased and activity in the somatomotor and default mode networks increased. Copyright © 2015 Elsevier Ltd. All rights reserved.
Margolis, Alvaro; Parboosingh, John
2015-01-01
Prior interpersonal relationships and interactivity among members of professional associations may impact the learning process in continuing medical education (CME). On the other hand, CME programs that encourage interactivity between participants may impact structures and behaviors in these professional associations. With the advent of information and communication technologies, new communication spaces have emerged that have the potential to enhance networked learning in national and international professional associations and increase the effectiveness of CME for health professionals. In this article, network science, based on the application of network theory and other theories, is proposed as an approach to better understand the contribution networking and interactivity between health professionals in professional communities make to their learning and adoption of new practices over time. © 2015 The Alliance for Continuing Education in the Health Professions, the Society for Academic Continuing Medical Education, and the Council on Continuing Medical Education, Association for Hospital Medical Education.
Building a Virtual Learning Network for Teachers in a Suburban School District
ERIC Educational Resources Information Center
Kurtzworth-Keen, Kristin A.
2011-01-01
Emerging research indicates that learning management systems such as Moodle can function as virtual, collaborative environments, where collegial interactions promote professional learning opportunities. This study deployed a mixed methods design in order to describe and analyze teacher participation in a virtual learning network (VLN) that was…
ERIC Educational Resources Information Center
Edwards, Frances
2012-01-01
Increasingly school change processes are being facilitated through the formation and operation of groups of teachers working together for improved student outcomes. These groupings are variously referred to as networks, networked learning communities, communities of practice, professional learning communities, learning circles or clusters. The…
Kim, Jihun; Kim, Jonghong; Jang, Gil-Jin; Lee, Minho
2017-03-01
Deep learning has received significant attention recently as a promising solution to many problems in the area of artificial intelligence. Among several deep learning architectures, convolutional neural networks (CNNs) demonstrate superior performance when compared to other machine learning methods in the applications of object detection and recognition. We use a CNN for image enhancement and the detection of driving lanes on motorways. In general, the process of lane detection consists of edge extraction and line detection. A CNN can be used to enhance the input images before lane detection by excluding noise and obstacles that are irrelevant to the edge detection result. However, training conventional CNNs requires considerable computation and a big dataset. Therefore, we suggest a new learning algorithm for CNNs using an extreme learning machine (ELM). The ELM is a fast learning method used to calculate network weights between output and hidden layers in a single iteration and thus, can dramatically reduce learning time while producing accurate results with minimal training data. A conventional ELM can be applied to networks with a single hidden layer; as such, we propose a stacked ELM architecture in the CNN framework. Further, we modify the backpropagation algorithm to find the targets of hidden layers and effectively learn network weights while maintaining performance. Experimental results confirm that the proposed method is effective in reducing learning time and improving performance. Copyright © 2016 Elsevier Ltd. All rights reserved.
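To make the extreme-learning-machine step concrete, the minimal sketch below shows the basic single-hidden-layer ELM: random hidden weights and a single least-squares solve for the output weights. The CNN feature extraction and the stacked-ELM modification described in the abstract are not reproduced, and the features and labels are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 20))                 # hypothetical feature vectors (stand-ins for CNN features)
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # hypothetical binary label

W_hidden = rng.normal(size=(20, 100))          # random hidden weights, never trained
b_hidden = rng.normal(size=100)
H = np.tanh(X @ W_hidden + b_hidden)           # hidden-layer activations

beta = np.linalg.pinv(H) @ y                   # output weights in a single least-squares solve

pred = (H @ beta > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```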
A Deep Learning Network Approach to ab initio Protein Secondary Structure Prediction
Spencer, Matt; Eickholt, Jesse; Cheng, Jianlin
2014-01-01
Ab initio protein secondary structure (SS) predictions are utilized to generate tertiary structure predictions, which are increasingly demanded due to the rapid discovery of proteins. Although recent developments have slightly exceeded previous methods of SS prediction, accuracy has stagnated around 80% and many wonder whether prediction can be advanced beyond this ceiling. Disciplines that have traditionally employed neural networks are experimenting with novel deep learning techniques in attempts to stimulate progress. Since neural networks have historically played an important role in SS prediction, we wanted to determine whether deep learning could contribute to the advancement of this field as well. We developed an SS predictor that makes use of the position-specific scoring matrix generated by PSI-BLAST and deep learning network architectures, which we call DNSS. Graphical processing units and CUDA software optimize the deep network architecture and efficiently train the deep networks. Optimal parameters for the training process were determined, and a workflow comprising three separately trained deep networks was constructed in order to make refined predictions. This deep learning network approach was used to predict SS for a fully independent test data set of 198 proteins, achieving a Q3 accuracy of 80.7% and a Sov accuracy of 74.2%. PMID:25750595
A Deep Learning Network Approach to ab initio Protein Secondary Structure Prediction.
Spencer, Matt; Eickholt, Jesse; Cheng, Jianlin
2015-01-01
Ab initio protein secondary structure (SS) predictions are utilized to generate tertiary structure predictions, which are increasingly demanded due to the rapid discovery of proteins. Although recent developments have slightly exceeded previous methods of SS prediction, accuracy has stagnated around 80 percent and many wonder whether prediction can be advanced beyond this ceiling. Disciplines that have traditionally employed neural networks are experimenting with novel deep learning techniques in attempts to stimulate progress. Since neural networks have historically played an important role in SS prediction, we wanted to determine whether deep learning could contribute to the advancement of this field as well. We developed an SS predictor that makes use of the position-specific scoring matrix generated by PSI-BLAST and deep learning network architectures, which we call DNSS. Graphical processing units and CUDA software optimize the deep network architecture and efficiently train the deep networks. Optimal parameters for the training process were determined, and a workflow comprising three separately trained deep networks was constructed in order to make refined predictions. This deep learning network approach was used to predict SS for a fully independent test dataset of 198 proteins, achieving a Q3 accuracy of 80.7 percent and a Sov accuracy of 74.2 percent.
Reinforcement learning for routing in cognitive radio ad hoc networks.
Al-Rawi, Hasan A A; Yau, Kok-Lim Alvin; Mohamad, Hafizal; Ramli, Nordin; Hashim, Wahidah
2014-01-01
Cognitive radio (CR) enables unlicensed users (or secondary users, SUs) to sense for and exploit underutilized licensed spectrum owned by the licensed users (or primary users, PUs). Reinforcement learning (RL) is an artificial intelligence approach that enables a node to observe, learn, and make appropriate decisions on action selection in order to maximize network performance. Routing enables a source node to search for a least-cost route to its destination node. While there have been increasing efforts to enhance the traditional RL approach for routing in wireless networks, this research area remains largely unexplored in the domain of routing in CR networks. This paper applies RL to routing and investigates the effects of various features of RL (i.e., reward function, exploitation, and exploration, as well as learning rate) through simulation. New approaches and recommendations are proposed to enhance the features in order to improve the network performance brought about by RL to routing. Simulation results show that the RL parameters of the reward function, exploitation, and exploration, as well as learning rate, must be well regulated, and the new approaches proposed in this paper improve SUs' network performance without significantly jeopardizing PUs' network performance, specifically SUs' interference to PUs.
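A classic Q-routing-style toy illustrates the general idea of RL-based routing: each node keeps a Q-value per next hop towards a destination and learns least-cost routes from per-hop feedback. The sketch below uses an assumed 20-node grid with random link delays and assumed learning parameters; it does not model the cognitive-radio aspects (PU activity, spectrum sensing) of the paper.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)
G = nx.convert_node_labels_to_integers(nx.grid_2d_graph(4, 5))  # assumed 20-node topology
for u, v in G.edges:
    G.edges[u, v]["delay"] = rng.uniform(1.0, 5.0)               # assumed per-link cost
dest = 0
Q = {n: {m: 0.0 for m in G.neighbors(n)} for n in G.nodes}       # Q[node][next_hop]
alpha, eps = 0.2, 0.1                                            # learning rate, exploration

for episode in range(3000):
    node = int(rng.integers(1, G.number_of_nodes()))             # random source (not the destination)
    for _ in range(100):                                         # forward one packet
        nbrs = list(G.neighbors(node))
        if rng.random() < eps:
            nxt = nbrs[rng.integers(len(nbrs))]                  # explore
        else:
            nxt = min(nbrs, key=lambda m: Q[node][m])            # exploit the least-cost hop
        hop = G.edges[node, nxt]["delay"]
        future = 0.0 if nxt == dest else min(Q[nxt].values())
        Q[node][nxt] += alpha * (hop + future - Q[node][nxt])
        node = nxt
        if node == dest:
            break

print("learned cost estimate from node 19 to node 0:", round(min(Q[19].values()), 2))
```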
Reinforcement Learning for Routing in Cognitive Radio Ad Hoc Networks
Al-Rawi, Hasan A. A.; Yau, Kok-Lim Alvin; Mohamad, Hafizal; Ramli, Nordin; Hashim, Wahidah
2014-01-01
Cognitive radio (CR) enables unlicensed users (or secondary users, SUs) to sense for and exploit underutilized licensed spectrum owned by the licensed users (or primary users, PUs). Reinforcement learning (RL) is an artificial intelligence approach that enables a node to observe, learn, and make appropriate decisions on action selection in order to maximize network performance. Routing enables a source node to search for a least-cost route to its destination node. While there have been increasing efforts to enhance the traditional RL approach for routing in wireless networks, this research area remains largely unexplored in the domain of routing in CR networks. This paper applies RL to routing and investigates the effects of various features of RL (i.e., reward function, exploitation, and exploration, as well as learning rate) through simulation. New approaches and recommendations are proposed to enhance the features in order to improve the network performance brought about by RL to routing. Simulation results show that the RL parameters of the reward function, exploitation, and exploration, as well as learning rate, must be well regulated, and the new approaches proposed in this paper improve SUs' network performance without significantly jeopardizing PUs' network performance, specifically SUs' interference to PUs. PMID:25140350
An Adaptive Resonance Theory account of the implicit learning of orthographic word forms.
Glotin, H; Warnier, P; Dandurand, F; Dufau, S; Lété, B; Touzet, C; Ziegler, J C; Grainger, J
2010-01-01
An Adaptive Resonance Theory (ART) network was trained to identify unique orthographic word forms. Each word input to the model was represented as an unordered set of ordered letter pairs (open bigrams) that implement a flexible prelexical orthographic code. The network learned to map this prelexical orthographic code onto unique word representations (orthographic word forms). The network was trained on a realistic corpus of reading textbooks used in French primary schools. The amount of training was strictly identical to children's exposure to reading material from grade 1 to grade 5. Network performance was examined at each grade level. Adjustment of the learning and vigilance parameters of the network allowed us to reproduce the developmental growth of word identification performance seen in children. The network exhibited a word frequency effect and was found to be sensitive to the order of presentation of word inputs, particularly with low frequency words. These words were better learned with a randomized presentation order compared with the order of presentation in the school books. These results open up interesting perspectives for the application of ART networks in the study of the dynamics of learning to read. 2009 Elsevier Ltd. All rights reserved.
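The open-bigram input code mentioned above is simple to illustrate: a word is represented by the set of all ordered letter pairs it contains, regardless of the letters in between. The snippet below shows only this prelexical code, not the ART network itself.

```python
from itertools import combinations

def open_bigrams(word):
    # every ordered pair of letters in the word, regardless of how far apart they are
    return {a + b for a, b in combinations(word, 2)}

print(sorted(open_bigrams("table")))   # the 10 ordered letter pairs of a 5-letter word
```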
Siri, Benoît; Berry, Hugues; Cessac, Bruno; Delord, Bruno; Quoy, Mathias
2008-12-01
We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule, including passive forgetting and different timescales, for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
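A minimal sketch of the kind of system analysed here, under assumed parameters, is a discrete-time random recurrent rate network with a generic Hebbian rule and passive forgetting; the Jacobian computed at the end is the object whose spectrum the analysis studies. This is an illustrative caricature, not the authors' exact model or learning rule.

```python
import numpy as np

rng = np.random.default_rng(12)
N = 100
W = rng.normal(scale=1.5 / np.sqrt(N), size=(N, N))   # random recurrent couplings (assumed gain)
x = rng.uniform(-1, 1, size=N)                        # neuronal activities
eps, lam = 1e-3, 1e-3                                 # Hebbian rate and passive-forgetting rate (assumed)

for t in range(3000):
    x = np.tanh(W @ x)                                # fast neuronal dynamics
    W += eps * np.outer(x, x) - lam * W               # slow Hebbian learning with passive forgetting

J = (1 - x**2)[:, None] * W                           # Jacobian of the map x -> tanh(W x)
print("spectral radius of the Jacobian:",
      round(float(np.abs(np.linalg.eigvals(J)).max()), 3))
```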
NASA Astrophysics Data System (ADS)
Mills, Kyle; Tamblyn, Isaac
2018-03-01
We demonstrate the capability of a convolutional deep neural network in predicting the nearest-neighbor energy of the 4 ×4 Ising model. Using its success at this task, we motivate the study of the larger 8 ×8 Ising model, showing that the deep neural network can learn the nearest-neighbor Ising Hamiltonian after only seeing a vanishingly small fraction of configuration space. Additionally, we show that the neural network has learned both the energy and magnetization operators with sufficient accuracy to replicate the low-temperature Ising phase transition. We then demonstrate the ability of the neural network to learn other spin models, teaching the convolutional deep neural network to accurately predict the long-range interaction of a screened Coulomb Hamiltonian, a sinusoidally attenuated screened Coulomb Hamiltonian, and a modified Potts model Hamiltonian. In the case of the long-range interaction, we demonstrate the ability of the neural network to recover the phase transition with equivalent accuracy to the numerically exact method. Furthermore, in the case of the long-range interaction, the benefits of the neural network become apparent; it is able to make predictions with a high degree of accuracy, and do so 1600 times faster than a CUDA-optimized exact calculation. Additionally, we demonstrate how the neural network succeeds at these tasks by looking at the weights learned in a simplified demonstration.
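For reference, the quantity the network is trained to predict is the nearest-neighbour Ising energy E = -J Σ_<ij> s_i s_j on a periodic lattice. The snippet below computes it exactly for one random configuration (with J = 1 assumed); the convolutional network itself is not shown.

```python
import numpy as np

def ising_energy(spins, J=1.0):
    # periodic boundaries: each spin interacts with its right and down neighbour exactly once
    right = np.roll(spins, -1, axis=1)
    down = np.roll(spins, -1, axis=0)
    return -J * np.sum(spins * right + spins * down)

rng = np.random.default_rng(5)
config = rng.choice([-1, 1], size=(8, 8))     # one 8x8 spin configuration
print("nearest-neighbour energy:", ising_energy(config))
```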
ERIC Educational Resources Information Center
Campana, Joe
2014-01-01
Informal learning networks play a key role in the skill and professional development of professionals, working in micro-businesses within Australia's digital media industry, as they do not have access to learning and development or human resources sections that can assist in mapping their learning pathway. Professionals working in this environment…
ERIC Educational Resources Information Center
Gu, Xiaoqing; Ding, Rui; Fu, Shirong
2011-01-01
Senior citizens are comparatively vulnerable in accessing learning opportunities offered on the Internet due to usability problems in current web design. In an effort to build a senior-friendly learning web as a part of the Life-long Learning Network in Shanghai, usability studies of two websites currently available to Shanghai senior citizens…
ERIC Educational Resources Information Center
Manning, Christin
2013-01-01
Workers in the 21st century workplace are faced with rapid and constant developments that place a heavy demand on them to continually learn beyond what the Human Resources and Training groups can meet. As a consequence, professionals must rely on non-formal learning approaches through the development of a personal learning network to keep…
Social Networking Tools and Teacher Education Learning Communities: A Case Study
ERIC Educational Resources Information Center
Poulin, Michael T.
2014-01-01
Social networking tools have become an integral part of a pre-service teacher's educational experience. As a result, the educational value of social networking tools in teacher preparation programs must be examined. The specific problem addressed in this study is that the role of social networking tools in teacher education learning communities…
ERIC Educational Resources Information Center
Lecluijze, Suzanne Elisabeth; de Haan, Mariëtte; Ünlüsoy, Asli
2015-01-01
This exploratory study examines ethno-cultural diversity in youth's narratives regarding their "online" learning experiences while also investigating how these narratives can be understood from the analysis of their online network structure and composition. Based on ego-network data of 79 respondents this study compared the…
Social Network Analysis in E-Learning Environments: A Preliminary Systematic Review
ERIC Educational Resources Information Center
Cela, Karina L.; Sicilia, Miguel Ángel; Sánchez, Salvador
2015-01-01
E-learning occupies an increasingly prominent place in education. It provides the learner with a rich virtual network where he or she can exchange ideas and information and create synergies through interactions with other members of the network, whether fellow learners or teachers. Social network analysis (SNA) has proven extremely powerful at…
ERIC Educational Resources Information Center
Hwang, Wu-Yuin; Kongcharoen, Chaknarin; Ghinea, Gheorghita
2014-01-01
Recently, various computer networking courses have included additional laboratory classes in order to enhance students' learning achievement. However, these classes need to establish a suitable laboratory where each student can connect network devices to configure and test functions within different network topologies. In this case, the Linux…
Adaptive nodes enrich nonlinear cooperative learning beyond traditional adaptation by links.
Sardi, Shira; Vardi, Roni; Goldental, Amir; Sheinin, Anton; Uzan, Herut; Kanter, Ido
2018-03-23
Physical models typically assume time-independent interactions, whereas neural networks and machine learning incorporate interactions that function as adjustable parameters. Here we demonstrate a new type of abundant cooperative nonlinear dynamics in which learning is attributed solely to the nodes rather than to the network links, whose number is significantly larger. The nodal, neuronal, fast adaptation follows the relative timing of a node's anisotropic (dendritic) inputs, as indicated experimentally, similarly to the slow learning mechanism currently attributed to the links, the synapses. It represents a non-local learning rule in which many incoming links to a node effectively undergo the same adaptation concurrently. The network dynamics is now, counterintuitively, governed by the weak links, which were previously assumed to be insignificant. This cooperative nonlinear dynamic adaptation provides a self-controlled mechanism that prevents divergence or vanishing of the learning parameters, in contrast to learning by links, and also supports self-oscillations of the effective learning parameters. It hints at a hierarchical computational complexity of nodes that follows their number of anisotropic inputs, and it opens new horizons for advanced deep learning algorithms and artificial-intelligence-based applications, as well as a new mechanism for enhanced and fast learning by neural networks.
Detection of eardrum abnormalities using ensemble deep learning approaches
NASA Astrophysics Data System (ADS)
Senaras, Caglar; Moberly, Aaron C.; Teknos, Theodoros; Essig, Garth; Elmaraghy, Charles; Taj-Schaal, Nazhat; Yua, Lianbo; Gurcan, Metin N.
2018-02-01
In this study, we proposed an approach to report the condition of the eardrum as "normal" or "abnormal" by ensembling two different deep learning architectures. In the first network (Network 1), we applied transfer learning to the Inception V3 network by using 409 labeled samples. As a second network (Network 2), we designed a convolutional neural network to take advantage of auto-encoders by using additional 673 unlabeled eardrum samples. The individual classification accuracies of the Network 1 and Network 2 were calculated as 84.4%(+/- 12.1%) and 82.6% (+/- 11.3%), respectively. Only 32% of the errors of the two networks were the same, making it possible to combine two approaches to achieve better classification accuracy. The proposed ensemble method allows us to achieve robust classification because it has high accuracy (84.4%) with the lowest standard deviation (+/- 10.3%).
QSAR modelling using combined simple competitive learning networks and RBF neural networks.
Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E
2018-04-01
The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
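A minimal sketch of the two-phase idea follows: simple winner-take-all competitive learning picks the RBF centres, and the RBF output weights are then fit by least squares. The descriptors, activity values, number of centres and kernel width below are all synthetic assumptions, not the authors' QSAR data or settings.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 5))                      # hypothetical molecular descriptors
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]                # hypothetical biological activity

# phase 1: simple competitive learning -- winner-take-all centre updates
n_centres, lr = 10, 0.05
centres = X[rng.choice(len(X), n_centres, replace=False)].copy()
for epoch in range(20):
    for x in X[rng.permutation(len(X))]:
        winner = np.argmin(np.linalg.norm(centres - x, axis=1))
        centres[winner] += lr * (x - centres[winner])

# phase 2: RBF network with those centres, linear output weights by least squares
sigma = 1.0
Phi = np.exp(-np.linalg.norm(X[:, None, :] - centres[None], axis=2) ** 2 / (2 * sigma**2))
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("training RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)).round(3))
```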
Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network.
Gilra, Aditya; Gerstner, Wulfram
2017-11-27
The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.
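The sketch below is a heavily simplified, rate-based caricature of the feedback idea (not the spiking FOLLOW network): the output error is fed back through fixed random weights, and each decoder change is the product of presynaptic activity and the error. The reference signal, feedback gain and learning rate are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
N, dt, T = 200, 0.01, 20.0                     # units, time step, duration (all assumed)
W_in = rng.normal(size=(N,))                   # fixed input encoding weights
W_fb = rng.normal(size=(N,))                   # fixed random error-feedback weights
W_out = np.zeros(N)                            # output decoders, learned online
eta, k = 1e-3, 10.0                            # learning rate, feedback gain

x_hat = 0.0
for step in range(int(T / dt)):
    t = step * dt
    x_ref = np.sin(2 * np.pi * 0.5 * t)        # reference trajectory to be followed
    err = x_hat - x_ref                        # output error
    r = np.tanh(W_in * x_ref - k * W_fb * err) # unit activities, error fed back through W_fb
    W_out -= eta * err * r                     # local rule: presynaptic activity x output error
    x_hat = float(W_out @ r)

print(f"final tracking error: {abs(x_hat - x_ref):.4f}")
```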
Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network
Gerstner, Wulfram
2017-01-01
The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically. PMID:29173280
Supervised Learning Using Spike-Timing-Dependent Plasticity of Memristive Synapses.
Nishitani, Yu; Kaneko, Yukihiro; Ueda, Michihito
2015-12-01
We propose a supervised learning model that enables error backpropagation for spiking neural network hardware. The method is derived by modifying an existing model to suit the hardware implementation. An example of a network circuit for the model is also presented. In this circuit, a three-terminal ferroelectric memristor (3T-FeMEM), which is a field-effect transistor with a gate insulator composed of ferroelectric materials, is used as an electric synapse device to store the analog synaptic weight. Our model can be implemented by reflecting the network error in the write voltage of the 3T-FeMEMs and introducing a spike-timing-dependent learning function to the device. An XOR problem was successfully demonstrated as a benchmark learning task by numerical simulations that used the circuit properties to estimate the learning performance. In principle, the learning time per step of this supervised learning model and circuit is independent of the number of neurons in each layer, promising high-speed and low-power computation in large-scale neural networks.
ERIC Educational Resources Information Center
AlShoaibi, Rana; Shukri, Nadia
2017-01-01
The major aim of this study is to better understand the university students' perceptions and attitudes towards using social network sites for learning English as well as to identify if there is a difference between male and female university students in terms of using social networking sites for learning English inside and outside the classroom.…
Deep Gate Recurrent Neural Network
2016-11-22
Report excerpt (recoverable reference and abstract fragments only). Cited references include …Schmidhuber, "A system for robotic heart surgery that learns to tie knots using recurrent neural networks," IEEE International Conference on…, and …and J. Peters, "Reinforcement learning in robotics: A survey," The International Journal of Robotics Research, 32:1238-1274, 2013, ISSN 0278-3649. Abstract fragments note that recurrent networks have been applied to tasks such as Machine Translation (Bahdanau et al., 2015) and Robot Reinforcement Learning (Bakker, 2001), and that the main idea behind these networks is to…
Discriminative Learning with Markov Logic Networks
2009-10-01
Tuyen N. Huynh, Department of Computer Sciences, University of Texas at Austin, Austin, TX 78712. Recoverable abstract fragment: …emerging area of research that addresses the problem of learning from noisy structured/relational data; Markov logic networks (MLNs), sets of weighted… (Remaining report documentation fields omitted.)
ERIC Educational Resources Information Center
Giere, Ursula, Ed.; Imel, Susan, Ed.
This publication contains the story of how the idea for a network conceived through CONFINTEA V became a [virtual] reality in ALADIN, the Adult Learning Documentation and Information Network. Part I contains 15 papers delivered as a part of the CONFINTEA workshop, "Global Community of Adult Learning through Information and Documentation:…
Jeng, J T; Lee, T T
2000-01-01
A Chebyshev polynomial-based unified model (CPBUM) neural network is introduced and applied to the control of magnetic bearing systems. First, we show that the CPBUM neural network not only has the same universal-approximation capability as conventional feedforward/recurrent neural networks but also learns faster, which makes it more suitable for controller design. Second, we propose an inverse system method, based on the CPBUM neural network, to control a magnetic bearing system. The proposed controller has two structures, namely off-line and on-line learning structures, and we derive a new learning algorithm for each. The experimental results show that the proposed neural network architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
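The fast learning of such models largely comes from expanding the input in a fixed Chebyshev polynomial basis and fitting only linear output weights. The sketch below shows just that basis expansion with a single least-squares fit on synthetic data; the CPBUM controller, its on-line learning algorithm and the magnetic bearing plant are not reproduced.

```python
import numpy as np

def chebyshev_features(x, order=5):
    # T_0..T_order evaluated with the recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)
    T = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T, axis=-1)

rng = np.random.default_rng(13)
x = rng.uniform(-1, 1, size=400)
y = np.sin(3 * x) + 0.05 * rng.normal(size=400)    # hypothetical plant response
Phi = chebyshev_features(x)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # fast single-shot fit of the linear weights
print("fit RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)).round(3))
```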
Zhao, Yu; Ge, Fangfei; Liu, Tianming
2018-07-01
fMRI data decomposition techniques have advanced significantly, from shallow models such as Independent Component Analysis (ICA) and Sparse Coding and Dictionary Learning (SCDL) to deep learning models such as Deep Belief Networks (DBN) and the Deep Convolutional Autoencoder (DCAE). However, interpreting the decomposed networks remains an open question owing to the lack of functional brain atlases, the absence of correspondence between decomposed or reconstructed networks across different subjects, and significant individual variability. Recent studies showed that deep learning, especially deep convolutional neural networks (CNNs), has an extraordinary ability to accommodate spatial object patterns; for example, our recent work using 3D CNNs for fMRI-derived network classification achieved high accuracy with a remarkable tolerance for mistakenly labelled training brain networks. However, training data preparation is one of the biggest obstacles for these supervised deep learning models of functional brain network recognition, since manual labelling requires tedious and time-consuming labour and can even introduce label mistakes. For mapping functional networks in large-scale datasets, such as the hundreds of thousands of brain networks used in this paper, manual labelling becomes almost infeasible. In response, in this work we tackled both the network recognition and the training data labelling tasks by proposing a new iteratively optimized deep learning CNN (IO-CNN) framework with automatic weak label initialization, which turns functional brain network recognition into a fully automatic large-scale classification procedure. Our extensive experiments on fMRI data from 1099 brains in ABIDE-II showed the great promise of the IO-CNN framework. Copyright © 2018 Elsevier B.V. All rights reserved.
Query-based learning for aerospace applications.
Saad, E W; Choi, J J; Vian, J L; Wunsch, D C Ii
2003-01-01
Models of real-world applications often include a large number of parameters with a wide dynamic range, which contributes to the difficulty of neural network training. Creating the training data set for such applications becomes costly, if not impossible. To overcome this challenge, one can employ an active learning technique known as query-based learning (QBL) to add performance-critical data to the training set during the learning phase, thereby efficiently improving overall learning and generalization. The performance-critical data can be obtained through an inverse mapping called network inversion (discrete or continuous) followed by an oracle query. This paper investigates the use of both inversion techniques for QBL and introduces an original heuristic for selecting the inversion target values in the continuous network inversion method. Efficiency and generalization were further enhanced by employing node-decoupled extended Kalman filter (NDEKF) training and a causality index (CI) as a means to reduce the input search dimensionality. The benefits of the overall QBL approach are demonstrated experimentally in two aerospace applications: a classification problem with a large input space and a control distribution problem.
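Continuous network inversion can be sketched as gradient descent on the input of a fixed network until the output reaches a chosen target. The toy below uses a small stand-in network with random (untrained) weights and an assumed target of 0.5; in query-based learning the inverted input would then be labelled by an oracle and added to the training set. It is illustrative only, not the paper's aerospace setup or its NDEKF/CI machinery.

```python
import numpy as np

rng = np.random.default_rng(8)
# a small stand-in "trained" network; in practice these weights would come from training
W1 = rng.normal(size=(2, 16)); b1 = rng.normal(size=16)
W2 = rng.normal(size=(16, 1)); b2 = rng.normal(size=1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2))), h

target = 0.5                     # invert towards the decision boundary (assumed target value)
x = rng.normal(size=(1, 2))      # initial input guess
lr = 0.1
for _ in range(1000):
    y, h = forward(x)
    d_out = (y - target) * y * (1 - y)         # derivative through the sigmoid output
    d_hid = (d_out @ W2.T) * (1 - h ** 2)      # backpropagate to the input, not the weights
    x -= lr * (d_hid @ W1.T)                   # gradient step on the input

print("inverted query:", x.round(3), "network output:", forward(x)[0].round(3))
```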
Investigating student communities with network analysis of interactions in a physics learning center
NASA Astrophysics Data System (ADS)
Brewe, Eric; Kramer, Laird; Sawtelle, Vashti
2012-06-01
Developing a sense of community among students is one of the three pillars of an overall reform effort to increase participation in physics, and the sciences more broadly, at Florida International University. The emergence of a research and learning community, embedded within a course reform effort, has contributed to increased recruitment and retention of physics majors. We utilize social network analysis to quantify interactions in Florida International University’s Physics Learning Center (PLC) that support the development of academic and social integration. The tools of social network analysis allow us to visualize and quantify student interactions and characterize the roles of students within a social network. After providing a brief introduction to social network analysis, we use sequential multiple regression modeling to evaluate factors that contribute to participation in the learning community. Results of the sequential multiple regression indicate that the PLC learning community is an equitable environment as we find that gender and ethnicity are not significant predictors of participation in the PLC. We find that providing students space for collaboration provides a vital element in the formation of a supportive learning community.
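The kind of measures such studies compute can be illustrated with a few lines of network analysis on a hypothetical interaction graph (the student names and edges below are invented; the networkx library is assumed to be available).

```python
import networkx as nx

# hypothetical student interaction edges observed in a learning centre
interactions = [("Ana", "Ben"), ("Ana", "Cara"), ("Ben", "Cara"),
                ("Cara", "Dev"), ("Dev", "Eli"), ("Eli", "Ana")]
G = nx.Graph(interactions)

degree = nx.degree_centrality(G)             # how widely a student interacts
betweenness = nx.betweenness_centrality(G)   # how often a student bridges otherwise separate peers
for student in G.nodes:
    print(f"{student}: degree={degree[student]:.2f}, betweenness={betweenness[student]:.2f}")
```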
Evolution of Associative Learning in Chemical Networks
McGregor, Simon; Vasas, Vera; Husbands, Phil; Fernando, Chrisantha
2012-01-01
Organisms that can learn about their environment and modify their behaviour appropriately during their lifetime are more likely to survive and reproduce than organisms that do not. While associative learning – the ability to detect correlated features of the environment – has been studied extensively in nervous systems, where the underlying mechanisms are reasonably well understood, mechanisms within single cells that could allow associative learning have received little attention. Here, using in silico evolution of chemical networks, we show that there exists a diversity of remarkably simple and plausible chemical solutions to the associative learning problem, the simplest of which uses only one core chemical reaction. We then asked to what extent a linear combination of chemical concentrations in the network could approximate the ideal Bayesian posterior of the environment given the stimulus history so far. This Bayesian analysis revealed the ‘memory traces’ of the chemical network. The implication of this paper is that there is little reason to believe that a lack of suitable phenotypic variation would prevent associative learning from evolving in cell signalling, metabolic, gene regulatory, or a mixture of these networks in cells. PMID:23133353
Motor Imagery Learning Modulates Functional Connectivity of Multiple Brain Systems in Resting State
Zhang, Hang; Long, Zhiying; Ge, Ruiyang; Xu, Lele; Jin, Zhen; Yao, Li; Liu, Yijun
2014-01-01
Background Learning motor skills involves subsequent modulation of resting-state functional connectivity in the sensory-motor system. This idea was mostly derived from the investigations on motor execution learning which mainly recruits the processing of sensory-motor information. Behavioral evidences demonstrated that motor skills in our daily lives could be learned through imagery procedures. However, it remains unclear whether the modulation of resting-state functional connectivity also exists in the sensory-motor system after motor imagery learning. Methodology/Principal Findings We performed a fMRI investigation on motor imagery learning from resting state. Based on previous studies, we identified eight sensory and cognitive resting-state networks (RSNs) corresponding to the brain systems and further explored the functional connectivity of these RSNs through the assessments, connectivity and network strengths before and after the two-week consecutive learning. Two intriguing results were revealed: (1) The sensory RSNs, specifically sensory-motor and lateral visual networks exhibited greater connectivity strengths in precuneus and fusiform gyrus after learning; (2) Decreased network strength induced by learning was proved in the default mode network, a cognitive RSN. Conclusions/Significance These results indicated that resting-state functional connectivity could be modulated by motor imagery learning in multiple brain systems, and such modulation displayed in the sensory-motor, visual and default brain systems may be associated with the establishment of motor schema and the regulation of introspective thought. These findings further revealed the neural substrates underlying motor skill learning and potentially provided new insights into the therapeutic benefits of motor imagery learning. PMID:24465577
NASA Astrophysics Data System (ADS)
Tang, Tian
The following dissertation explains how technological change of wind power, in terms of cost reduction and performance improvement, is achieved in China and the US through energy policies, technological learning, and collaboration. The objective of this dissertation is to understand how energy policies affect key actors in the power sector to promote renewable energy and achieve cost reductions for climate change mitigation in different institutional arrangements. The dissertation consists of three essays. The first essay examines the learning processes and technological change of wind power in China. I integrate collaboration and technological learning theories to model how wind technologies are acquired and diffused among various wind project participants in China through the Clean Development Mechanism (CDM)--an international carbon trade program, and empirically test whether different learning channels lead to cost reduction of wind power. Using pooled cross-sectional data of Chinese CDM wind projects and spatial econometric models, I find that a wind project developer's previous experience (learning-by-doing) and industrywide wind project experience (spillover effect) significantly reduce the costs of wind power. The spillover effect provides justification for subsidizing users of wind technologies so as to offset wind farm investors' incentive to free-ride on knowledge spillovers from other wind energy investors. The CDM has played such a role in China. Most importantly, this essay provides the first empirical evidence of "learning-by-interacting": CDM also drives wind power cost reduction and performance improvement by facilitating technology transfer through collaboration between foreign turbine manufacturers and local wind farm developers. The second essay extends this learning framework to the US wind power sector, where I examine how state energy policies, restructuring of the electricity market, and learning among actors in wind industry lead to performance improvement of wind farms. Unlike China, the restructuring of the US electricity market created heterogeneity in transmission network governance across regions. Thus, I add transmission network governance to my learning framework to test the impacts of different transmission network governance models. Using panel data of existing utility-scale wind farms in US during 2001-2012 and spatial models, I find that the performance of a wind project is improved through more collaboration among project participants (learning-by-interacting), and this improvement is even greater if the wind project is interconnected to a regional transmission network coordinated by an independent system operator or a regional transmission organization (ISO/RTO). In the third essay, I further explore how different transmission network governance models affect wind power integration through a comparative case study. I compare two regional transmission networks, which represent two major transmission network governance models in the US: the ISO/RTO-governance model and the non-RTO model. Using archival data and interviews with key network participants, I find that a centralized transmission network coordinated through an ISO/RTO is more effective in integrating wind power because it allows resource pooling and optimal allocating of the resources by the central network administrative agency (NAO). 
The case study also suggests an alternative path to improved network effectiveness for a less cohesive network, which is through more frequent resource exchange among subgroups within a large network. On top of that, this essay contributes to the network governance literature by providing empirical evidence on the coexistence of hierarchy, market, and collaboration in complex service delivery networks. These coordinating mechanisms complement each other to provide system flexibility and stability, particularly when the network operates in a turbulent environment with changes and uncertainties.
Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.
Burbank, Kendra S
2015-12-01
The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons
Burbank, Kendra S.
2015-01-01
The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field’s Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks. PMID:26633645
Reinforcement Learning of Two-Joint Virtual Arm Reaching in a Computer Model of Sensorimotor Cortex
Neymotin, Samuel A.; Chadderdon, George L.; Kerr, Cliff C.; Francis, Joseph T.; Lytton, William W.
2014-01-01
Neocortical mechanisms of learning sensorimotor control involve a complex series of interactions at multiple levels, from synaptic mechanisms to cellular dynamics to network connectomics. We developed a model of sensory and motor neocortex consisting of 704 spiking model neurons. Sensory and motor populations included excitatory cells and two types of interneurons. Neurons were interconnected with AMPA/NMDA and GABAA synapses. We trained our model using spike-timing-dependent reinforcement learning to control a two-joint virtual arm to reach to a fixed target. For each of 125 trained networks, we used 200 training sessions, each involving 15 s reaches to the target from 16 starting positions. Learning altered network dynamics, with enhancements to neuronal synchrony and behaviorally relevant information flow between neurons. After learning, networks demonstrated retention of behaviorally relevant memories by using proprioceptive information to perform reach-to-target from multiple starting positions. Networks dynamically controlled which joint rotations to use to reach a target, depending on current arm position. Learning-dependent network reorganization was evident in both sensory and motor populations: learned synaptic weights showed target-specific patterning optimized for particular reach movements. Our model embodies an integrative hypothesis of sensorimotor cortical learning that could be used to interpret future electrophysiological data recorded in vivo from sensorimotor learning experiments. We used our model to make the following predictions: learning enhances synchrony in neuronal populations and behaviorally relevant information flow across neuronal populations, enhanced sensory processing aids task-relevant motor performance and the relative ease of a particular movement in vivo depends on the amount of sensory information required to complete the movement. PMID:24047323
Prefrontal Cortex Networks Shift from External to Internal Modes during Learning.
Brincat, Scott L; Miller, Earl K
2016-09-14
As we learn about items in our environment, their neural representations become increasingly enriched with our acquired knowledge. But there is little understanding of how network dynamics and neural processing related to external information changes as it becomes laden with "internal" memories. We sampled spiking and local field potential activity simultaneously from multiple sites in the lateral prefrontal cortex (PFC) and the hippocampus (HPC)-regions critical for sensory associations-of monkeys performing an object paired-associate learning task. We found that in the PFC, evoked potentials to, and neural information about, external sensory stimulation decreased while induced beta-band (∼11-27 Hz) oscillatory power and synchrony associated with "top-down" or internal processing increased. By contrast, the HPC showed little evidence of learning-related changes in either spiking activity or network dynamics. The results suggest that during associative learning, PFC networks shift their resources from external to internal processing. As we learn about items in our environment, their representations in our brain become increasingly enriched with our acquired "top-down" knowledge. We found that in the prefrontal cortex, but not the hippocampus, processing of external sensory inputs decreased while internal network dynamics related to top-down processing increased. The results suggest that during learning, prefrontal cortex networks shift their resources from external (sensory) to internal (memory) processing. Copyright © 2016 the authors 0270-6474/16/369739-16$15.00/0.
Hybrid computing using a neural network with dynamic external memory.
Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis
2016-10-27
Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.
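One building block of such memory-augmented networks, content-based addressing, can be shown compactly: a read key is compared to every memory row by cosine similarity and the similarities are softened into read weights. The sketch below is illustrative only and omits the DNC's temporal linkage, usage-based allocation and controller; the memory size and key strength are assumptions.

```python
import numpy as np

def content_read(memory, key, beta):
    # memory: (N, W) matrix of N slots; key: (W,) read key; beta: key strength
    sim = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    weights = np.exp(beta * sim)
    weights /= weights.sum()                 # softmax over memory slots
    return weights @ memory                  # weighted read vector

rng = np.random.default_rng(9)
M = rng.normal(size=(16, 8))                 # 16 memory slots of width 8
key = M[3] + 0.1 * rng.normal(size=8)        # noisy query for slot 3's content
print("read vector close to slot 3:",
      np.allclose(content_read(M, key, beta=10.0), M[3], atol=0.5))
```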
Prefrontal Cortex Networks Shift from External to Internal Modes during Learning
Brincat, Scott L.; Miller, Earl K.
2016-01-01
As we learn about items in our environment, their neural representations become increasingly enriched with our acquired knowledge. But there is little understanding of how network dynamics and neural processing related to external information changes as it becomes laden with “internal” memories. We sampled spiking and local field potential activity simultaneously from multiple sites in the lateral prefrontal cortex (PFC) and the hippocampus (HPC)—regions critical for sensory associations—of monkeys performing an object paired-associate learning task. We found that in the PFC, evoked potentials to, and neural information about, external sensory stimulation decreased while induced beta-band (∼11–27 Hz) oscillatory power and synchrony associated with “top-down” or internal processing increased. By contrast, the HPC showed little evidence of learning-related changes in either spiking activity or network dynamics. The results suggest that during associative learning, PFC networks shift their resources from external to internal processing. SIGNIFICANCE STATEMENT As we learn about items in our environment, their representations in our brain become increasingly enriched with our acquired “top-down” knowledge. We found that in the prefrontal cortex, but not the hippocampus, processing of external sensory inputs decreased while internal network dynamics related to top-down processing increased. The results suggest that during learning, prefrontal cortex networks shift their resources from external (sensory) to internal (memory) processing. PMID:27629722
Recommending Peers for Learning: Matching on Dissimilarity in Interpretations to Provoke Breakdown
ERIC Educational Resources Information Center
Rajagopal, Kamakshi; van Bruggen, Jan M.; Sloep, Peter B.
2017-01-01
People recommenders are a widespread feature of social networking sites and educational social learning platforms alike. However, when these systems are used to extend learners' Personal Learning Networks, they often fall short of providing recommendations of learning value to their users. This paper proposes a design of a people recommender based…
Unlocking the Potential of Urban Communities: Case Studies of Twelve Learning Cities
ERIC Educational Resources Information Center
Valdés-Cotera, Raúl, Ed.; Longworth, Norman, Ed.; Lunardon, Katharina, Ed.; Wang, Mo, Ed.; Jo, Sunok, Ed.; Crowe, Sinéad, Ed.
2015-01-01
UNESCO established the UNESCO Global Network of Learning Cities (GNLC) to encourage the development of learning cities. By providing technical support, capacity development, and a platform where members can share ideas on policies and best practice, this international exchange network helps urban communities create thriving learning cities. The…
Machine learning vortices at the Kosterlitz-Thouless transition
NASA Astrophysics Data System (ADS)
Beach, Matthew J. S.; Golubeva, Anna; Melko, Roger G.
2018-01-01
Efficient and automated classification of phases from minimally processed data is one goal of machine learning in condensed-matter and statistical physics. Supervised algorithms trained on raw samples of microstates can successfully detect conventional phase transitions via learning a bulk feature such as an order parameter. In this paper, we investigate whether neural networks can learn to classify phases based on topological defects. We address this question on the two-dimensional classical XY model which exhibits a Kosterlitz-Thouless transition. We find significant feature engineering of the raw spin states is required to convincingly claim that features of the vortex configurations are responsible for learning the transition temperature. We further show a single-layer network does not correctly classify the phases of the XY model, while a convolutional network easily performs classification by learning the global magnetization. Finally, we design a deep network capable of learning vortices without feature engineering. We demonstrate the detection of vortices does not necessarily result in the best classification accuracy, especially for lattices of less than approximately 1000 spins. For larger systems, it remains a difficult task to learn vortices.
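For readers unfamiliar with the topological defects involved, the sketch below counts vortex and antivortex charges directly from raw XY spin angles by summing wrapped angle differences around each plaquette; this is a standard lattice computation offered only for illustration, not the authors' network pipeline.

```python
import numpy as np

def vortex_map(theta):
    """Winding number of each plaquette of an XY spin configuration.

    theta: (L, L) array of spin angles in radians, periodic boundaries.
    Returns an (L, L) array whose entries are +1 (vortex), -1 (antivortex), or 0.
    """
    def wrap(d):                          # map differences into (-pi, pi]
        return (d + np.pi) % (2.0 * np.pi) - np.pi

    a = theta
    b = np.roll(theta, -1, axis=1)                        # right neighbour
    c = np.roll(np.roll(theta, -1, axis=0), -1, axis=1)   # diagonal neighbour
    d = np.roll(theta, -1, axis=0)                        # lower neighbour
    # Sum of wrapped angle differences around the plaquette, in units of 2*pi.
    winding = (wrap(b - a) + wrap(c - b) + wrap(d - c) + wrap(a - d)) / (2.0 * np.pi)
    return np.rint(winding)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    spins = rng.uniform(0.0, 2.0 * np.pi, size=(32, 32))  # hot (disordered) sample
    charges = vortex_map(spins)
    print("vortices:", int((charges > 0).sum()), "antivortices:", int((charges < 0).sum()))
```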
Synchronization of heteroclinic circuits through learning in coupled neural networks
NASA Astrophysics Data System (ADS)
Selskii, Anton; Makarov, Valeri A.
2016-01-01
The synchronization of oscillatory activity in neural networks is usually implemented by coupling the state variables describing neuronal dynamics. Here we study another, but complementary mechanism based on a learning process with memory. A driver network, acting as a teacher, exhibits winner-less competition (WLC) dynamics, while a driven network, a learner, tunes its internal couplings according to the oscillations observed in the teacher. We show that under appropriate training the learner can "copy" the coupling structure and thus synchronize oscillations with the teacher. The replication of the WLC dynamics occurs for intermediate memory lengths only; consequently, the learner network exhibits a phenomenon of learning resonance.
Person re-identification over camera networks using multi-task distance metric learning.
Ma, Lianyang; Yang, Xiaokang; Tao, Dacheng
2014-08-01
Person re-identification in a camera network is a valuable yet challenging problem to solve. Existing methods learn a common Mahalanobis distance metric from the data collected across different cameras and then exploit the learned metric to identify people in images. However, the cameras in a camera network have different settings, and the recorded images are seriously affected by variability in illumination conditions, camera viewing angles, and background clutter. Using a common metric for person re-identification across different camera pairs therefore overlooks the differences in camera settings. On the other hand, manually labeling people in surveillance videos is very time-consuming; in most existing person re-identification data sets, only one image of a person is collected from each of only two cameras, so directly learning a unique Mahalanobis distance metric for each camera pair is susceptible to over-fitting on insufficiently labeled data. In this paper, we reformulate person re-identification in a camera network as a multitask distance metric learning problem. The proposed method designs multiple Mahalanobis distance metrics to cope with the complicated conditions that exist in typical camera networks. These metrics are different but related, and are learned with joint regularization to alleviate over-fitting. Furthermore, by extending this formulation, we present a novel multitask maximally collapsing metric learning (MtMCML) model for person re-identification in a camera network. Experimental results demonstrate that formulating person re-identification over camera networks as a multitask distance metric learning problem can improve performance, and that the proposed MtMCML works substantially better than other current state-of-the-art person re-identification methods.
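To make the core quantity concrete, the sketch below evaluates Mahalanobis distances under several hypothetical per-camera-pair metrics that share a common component; it only illustrates the multi-task parameterization in spirit and is not the MtMCML optimization itself (all names are ours).

```python
import numpy as np

def mahalanobis(x, y, M):
    """Squared Mahalanobis distance between feature vectors x and y under PSD matrix M."""
    d = x - y
    return float(d @ M @ d)

# Hypothetical multi-task parameterization: each camera pair t uses a metric
# M_t = M_shared + D_t, so the pair-specific parts stay close to a common metric
# (the joint regularization described above would penalize large D_t).
rng = np.random.default_rng(0)
dim, n_pairs = 16, 3
A_shared = rng.normal(size=(dim, dim))
M_shared = A_shared @ A_shared.T                    # positive semi-definite shared part
metrics = []
for _ in range(n_pairs):
    A_t = 0.1 * rng.normal(size=(dim, dim))
    metrics.append(M_shared + A_t @ A_t.T)          # PSD pair-specific metric

x, y = rng.normal(size=dim), rng.normal(size=dim)
for t, M_t in enumerate(metrics):
    print(f"camera pair {t}: distance = {mahalanobis(x, y, M_t):.2f}")
```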
A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.
Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi
2015-12-01
Hierarchical deep neural networks are currently popular learning models that imitate the hierarchical architecture of the human brain, and single-layer feature extractors are the building blocks of such deep networks. Sparse feature learning models can learn useful representations, but most of them require a user-defined constant to control the sparsity of the representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learned by simultaneously optimizing two objectives, the reconstruction error and the sparsity of the hidden units, to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective and that the proposed multiobjective model can learn useful sparse features.
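A minimal sketch of the two competing objectives, assuming a toy autoencoder with a linear decoder of our own design; the evolutionary multiobjective search that trades the objectives off against each other is omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objectives(x, W_enc, W_dec):
    """Return (reconstruction error, sparsity) for one autoencoder candidate.

    A multiobjective optimizer would maintain a Pareto front over these two
    values instead of weighting them with a user-defined sparsity constant.
    """
    h = sigmoid(x @ W_enc)                 # hidden representation
    x_hat = h @ W_dec                      # linear decoder
    recon = float(np.mean((x - x_hat) ** 2))
    sparsity = float(np.mean(np.abs(h)))   # lower = sparser hidden units
    return recon, sparsity

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 20))                           # toy data batch
candidate = (rng.normal(size=(20, 8)), rng.normal(size=(8, 20)))
print(objectives(x, *candidate))
```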
Learning in stochastic neural networks for constraint satisfaction problems
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Adorf, Hans-Martin
1989-01-01
Researchers describe a newly-developed artificial neural network algorithm for solving constraint satisfaction problems (CSPs) which includes a learning component that can significantly improve the performance of the network from run to run. The network, referred to as the Guarded Discrete Stochastic (GDS) network, is based on the discrete Hopfield network but differs from it primarily in that auxiliary networks (guards) are asymmetrically coupled to the main network to enforce certain types of constraints. Although the presence of asymmetric connections implies that the network may not converge, it was found that, for certain classes of problems, the network often quickly converges to find satisfactory solutions when they exist. The network can run efficiently on serial machines and can find solutions to very large problems (e.g., N-queens for N as large as 1024). One advantage of the network architecture is that network connection strengths need not be instantiated when the network is established: they are needed only when a participating neural element transitions from off to on. They have exploited this feature to devise a learning algorithm, based on consistency techniques for discrete CSPs, that updates the network biases and connection strengths and thus improves the network performance.
The ASE Improving Practical Work in Triple Science Learning Skills Network
ERIC Educational Resources Information Center
Barber, Paul; Chapman, Georgina; Ellis-Sackey, Cecilia; Grainger, Beth; Jones, Steve
2011-01-01
In July 2010, the Association for Science Education won a bid to run a "Sharing innovation network" for the Triple Science Support Programme, which is delivered by the Learning Skills Network on behalf of the Department for Education. The network involves schools from the London boroughs of Tower Hamlets and Greenwich. In this article,…
ERIC Educational Resources Information Center
West, Patti; Rutstein, Daisy Wise; Mislevy, Robert J.; Liu, Junhui; Choi, Younyoung; Levy, Roy; Crawford, Aaron; DiCerbo, Kristen E.; Chappel, Kristina; Behrens, John T.
2010-01-01
A major issue in the study of learning progressions (LPs) is linking student performance on assessment tasks to the progressions. This report describes the challenges faced in making this linkage using Bayesian networks to model LPs in the field of computer networking. The ideas are illustrated with exemplar Bayesian networks built on Cisco…
Network Training for a Boy with Learning Disabilities and Behaviours That Challenge
ERIC Educational Resources Information Center
Cooper, Kate; McElwee, Jennifer
2016-01-01
Background: Network Training is an intervention that draws upon systemic ideas and behavioural principles to promote positive change in networks of support for people defined as having a learning disability. To date, there are no published case studies looking at the outcomes of Network Training. Materials and Methods: This study aimed to…
Designing a Self-Contained Group Area Network for Ubiquitous Learning
ERIC Educational Resources Information Center
Chen, Nian-Shing; Kinshuk; Wei, Chun-Wang; Yang, Stephen J. H.
2008-01-01
A number of studies have evidenced that handheld devices are appropriate tools to facilitate face-to-face collaborative learning effectively because of the possibility of ample social interactions. Group Area Network, or GroupNet, proposed in this paper, uses handheld devices to fill the gap between Local Area Network and Body Area Network.…
ERIC Educational Resources Information Center
Cook, John; Santos, Patricia
2014-01-01
In this paper, we argue that there is much that we can learn from the past as we explore the issues raised when designing innovative social media and mobile technologies for learning. Like the social networking that took place in coffee houses in the 1600s, the Internet-enabled social networks of today stand accused of being the so-called…
Inter-firm Networks, Organizational Learning and Knowledge Updating: An Empirical Study
NASA Astrophysics Data System (ADS)
Zhang, Su-rong; Wang, Wen-ping
In the era of the knowledge-based economy, in which information technology develops rapidly, the rate of knowledge updating has become a critical factor for enterprises seeking competitive advantage. We therefore build an interactional theoretical model linking inter-firm networks, organizational learning, and knowledge updating, and demonstrate it with an empirical study. The results show that inter-firm networks and organizational learning are the sources of knowledge updating.
Learning characteristics of a space-time neural network as a tether skiprope observer
NASA Technical Reports Server (NTRS)
Lea, Robert N.; Villarreal, James A.; Jani, Yashvant; Copeland, Charles
1993-01-01
The Software Technology Laboratory at the Johnson Space Center is testing a Space Time Neural Network (STNN) for observing tether oscillations present during retrieval of a tethered satellite. Proper identification of tether oscillations, known as 'skiprope' motion, is vital to safe retrieval of the tethered satellite. Our studies indicate that STNN has certain learning characteristics that must be understood properly to utilize this type of neural network for the tethered satellite problem. We present our findings on the learning characteristics including a learning rate versus momentum performance table.
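The learning rate and momentum mentioned in the performance table are the usual back-propagation hyperparameters; for reference, a generic momentum update (not the STNN code, variable names ours) looks like this:

```python
import numpy as np

def momentum_step(w, grad, velocity, lr=0.05, momentum=0.9):
    """One gradient step with momentum; lr and momentum are the two
    hyperparameters whose interaction such performance tables explore."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

w, v = np.zeros(3), np.zeros(3)
w, v = momentum_step(w, grad=np.array([1.0, -2.0, 0.5]), velocity=v)
print(w)
```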
Learning characteristics of a space-time neural network as a tether skiprope observer
NASA Technical Reports Server (NTRS)
Lea, Robert N.; Villarreal, James A.; Jani, Yashvant; Copeland, Charles
1992-01-01
The Software Technology Laboratory at JSC is testing a Space Time Neural Network (STNN) for observing tether oscillations present during retrieval of a tethered satellite. Proper identification of tether oscillations, known as 'skiprope' motion, is vital to safe retrieval of the tethered satellite. Our studies indicate that STNN has certain learning characteristics that must be understood properly to utilize this type of neural network for the tethered satellite problem. We present our findings on the learning characteristics including a learning rate versus momentum performance table.
ICADx: interpretable computer aided diagnosis of breast masses
NASA Astrophysics Data System (ADS)
Kim, Seong Tae; Lee, Hakmin; Kim, Hak Gu; Ro, Yong Man
2018-02-01
In this study, a novel computer aided diagnosis (CADx) framework is devised to investigate interpretability in classifying breast masses. Recently, deep learning technology has been successfully applied to medical image analysis, including CADx. Existing deep learning based CADx approaches, however, have a limitation in explaining the diagnostic decision. In real clinical practice, clinical decisions need to be made with a reasonable explanation, so current deep learning approaches to CADx are limited for real-world deployment. In this paper, we investigate interpretability in CADx with the proposed interpretable CADx (ICADx) framework. The proposed framework is devised as a generative adversarial network, which consists of an interpretable diagnosis network and a synthetic lesion generative network that learn the relationship between malignancy and a standardized description (BI-RADS). The lesion generative network and the interpretable diagnosis network compete in adversarial learning so that both networks improve. The effectiveness of the proposed method was validated on a public mammogram database. Experimental results showed that the proposed ICADx framework could provide interpretability of a mass as well as mass classification. This was mainly attributed to the fact that the proposed method was effectively trained to find the relationship between malignancy and interpretations via adversarial learning. These results imply that the proposed ICADx framework could be a promising approach for developing CADx systems.
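The adversarial competition described above follows the familiar generator-versus-discriminator training pattern; the sketch below is a generic PyTorch adversarial loop over toy feature vectors, not the ICADx networks or data (all module and variable names are ours).

```python
import torch
import torch.nn as nn

# Toy stand-ins: a generator that maps noise to a feature vector, and a
# discriminator that scores vectors as real or synthetic.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(512, 32)        # placeholder for real lesion features

for step in range(200):
    real = real_data[torch.randint(0, 512, (64,))]
    fake = G(torch.randn(64, 16))

    # Discriminator step: push real samples toward 1 and synthetic ones toward 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator output 1 on synthetic samples.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```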
Learning Orthographic Structure With Sequential Generative Neural Networks.
Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco
2016-04-01
Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain. Copyright © 2015 Cognitive Science Society, Inc.
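To convey the flavour of the generative learning involved, here is a textbook one-step contrastive-divergence (CD-1) update for a plain Bernoulli RBM; the authors' sequential (temporal) extension is not shown, and the variable names are ours.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(v0, W, b_vis, b_hid, lr=0.05, rng=np.random.default_rng(0)):
    """One contrastive-divergence (CD-1) update for a Bernoulli RBM."""
    # Up pass: sample hidden units given the data.
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Down-up pass: reconstruct the visibles, then recompute hidden probabilities.
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    p_h1 = sigmoid(p_v1 @ W + b_hid)
    # Gradient approximation: data correlations minus reconstruction correlations.
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
    b_vis += lr * (v0 - p_v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_vis, b_hid

rng = np.random.default_rng(1)
v = (rng.random((32, 12)) < 0.5).astype(float)   # 32 binary "letter feature" vectors
W, b_v, b_h = rng.normal(scale=0.01, size=(12, 6)), np.zeros(12), np.zeros(6)
W, b_v, b_h = cd1_step(v, W, b_v, b_h)
```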
Ammenwerth, Elske; Hackl, Werner O
2017-01-01
Learning as a constructive process works best in interaction with other learners. Support of social interaction processes is a particular challenge within online learning settings due to the spatial and temporal distribution of participants. It should thus be carefully monitored. We present structural network analysis and related indicators to analyse and visualize interaction patterns of participants in online learning settings. We validate this approach in two online courses and show how the visualization helps to monitor interaction and to identify activity profiles of learners. Structural network analysis is a feasible approach for an analysis of the intensity and direction of interaction in online learning settings.
Zhu, Feng; Aziz, H. M. Abdul; Qian, Xinwu; ...
2015-01-31
Our study develops a novel reinforcement learning algorithm for the challenging coordinated signal control problem. Traffic signals are modeled as intelligent agents interacting with the stochastic traffic environment. The model is built on the framework of coordinated reinforcement learning. The Junction Tree Algorithm (JTA) based reinforcement learning is proposed to obtain an exact inference of the best joint actions for all the coordinated intersections. Moreover, the algorithm is implemented and tested with a network containing 18 signalized intersections in VISSIM. Finally, our results show that the JTA based algorithm outperforms independent learning (Q-learning), real-time adaptive learning, and fixed timing plans in terms of average delay, number of stops, and vehicular emissions at the network level.
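The independent Q-learning baseline that the JTA approach is compared against can be written very compactly; the sketch below uses a toy state and action coding of our own and does not model the joint-action inference or the VISSIM interface.

```python
import numpy as np

n_states, n_actions = 64, 2          # e.g. discretized queue lengths; keep or switch phase
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def q_update(s, a, r, s_next):
    """Standard Q-learning backup for one intersection acting independently."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

def choose_action(s):
    """Epsilon-greedy action selection over the current value estimates."""
    return rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())

# Example of one interaction: reward could be the negative delay observed
# after taking action a in state s and landing in state s_next.
s = 12
a = choose_action(s)
q_update(s, a, r=-3.5, s_next=17)
```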
Tonelli, Paul; Mouret, Jean-Baptiste
2013-01-01
A major goal of bio-inspired artificial intelligence is to design artificial neural networks with abilities that resemble those of animal nervous systems. It is commonly believed that two keys for evolving nature-like artificial neural networks are (1) the developmental process that links genes to nervous systems, which enables the evolution of large, regular neural networks, and (2) synaptic plasticity, which allows neural networks to change during their lifetime. So far, these two topics have been mainly studied separately. The present paper shows that they are actually deeply connected. Using a simple operant conditioning task and a classic evolutionary algorithm, we compare three ways to encode plastic neural networks: a direct encoding, a developmental encoding inspired by computational neuroscience models, and a developmental encoding inspired by morphogen gradients (similar to HyperNEAT). Our results suggest that using a developmental encoding could improve the learning abilities of evolved, plastic neural networks. Complementary experiments reveal that this result is likely the consequence of the bias of developmental encodings towards regular structures: (1) in our experimental setup, encodings that tend to produce more regular networks yield networks with better general learning abilities; (2) whatever the encoding is, the most regular networks are statistically those with the best learning abilities. PMID:24236099
Large-scale Cortical Network Properties Predict Future Sound-to-Word Learning Success
Sheppard, John Patrick; Wang, Ji-Ping; Wong, Patrick C. M.
2013-01-01
The human brain possesses a remarkable capacity to interpret and recall novel sounds as spoken language. These linguistic abilities arise from complex processing spanning a widely distributed cortical network and are characterized by marked individual variation. Recently, graph theoretical analysis has facilitated the exploration of how such aspects of large-scale brain functional organization may underlie cognitive performance. Brain functional networks are known to possess small-world topologies characterized by efficient global and local information transfer, but whether these properties relate to language learning abilities remains unknown. Here we applied graph theory to construct large-scale cortical functional networks from cerebral hemodynamic (fMRI) responses acquired during an auditory pitch discrimination task and found that such network properties were associated with participants’ future success in learning words of an artificial spoken language. Successful learners possessed networks with reduced local efficiency but increased global efficiency relative to less successful learners and had a more cost-efficient network organization. Regionally, successful and less successful learners exhibited differences in these network properties spanning bilateral prefrontal, parietal, and right temporal cortex, overlapping a core network of auditory language areas. These results suggest that efficient cortical network organization is associated with sound-to-word learning abilities among healthy, younger adults. PMID:22360625
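The global and local efficiency measures referred to above are standard graph-theoretic quantities; as a purely illustrative example (a synthetic small-world graph, not the fMRI-derived networks of the study), they can be computed directly with networkx:

```python
import networkx as nx

# Synthetic small-world graph standing in for a thresholded functional network.
G = nx.watts_strogatz_graph(n=90, k=6, p=0.1, seed=0)

print("global efficiency:", nx.global_efficiency(G))   # efficiency of long-range transfer
print("local efficiency:", nx.local_efficiency(G))      # efficiency within neighbourhoods
print("density (a proxy for wiring cost):", nx.density(G))
```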
Mocanu, Decebal Constantin; Mocanu, Elena; Stone, Peter; Nguyen, Phuong H; Gibescu, Madeleine; Liotta, Antonio
2018-06-19
Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erdős-Rényi random graph) of two consecutive layers of neurons into a scale-free topology, during learning. Our method replaces the fully-connected layers of artificial neural networks with sparse ones before training, quadratically reducing the number of parameters with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.
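The prune-and-regrow step at the heart of such sparse evolutionary training can be sketched on a single weight matrix; this is a simplified illustration under our own parameter names (`zeta` for the rewired fraction), not the published implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
shape, density, zeta = (256, 128), 0.1, 0.3   # zeta = fraction of links rewired per epoch

# Erdos-Renyi sparse initialization: keep each connection with probability `density`.
mask = rng.random(shape) < density
W = np.where(mask, rng.normal(scale=0.1, size=shape), 0.0)

def evolve_topology(W, mask):
    """Remove the smallest-magnitude weights and regrow as many random ones."""
    active = np.flatnonzero(mask)
    k = int(zeta * active.size)
    # Prune: drop the k active weights closest to zero.
    weakest = active[np.argsort(np.abs(W.flat[active]))[:k]]
    mask.flat[weakest] = False
    W.flat[weakest] = 0.0
    # Regrow: add k new random connections among the currently inactive ones.
    inactive = np.flatnonzero(~mask)
    new = rng.choice(inactive, size=k, replace=False)
    mask.flat[new] = True
    W.flat[new] = rng.normal(scale=0.1, size=k)
    return W, mask

W, mask = evolve_topology(W, mask)   # would be called after each training epoch
```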
Deep Learning: A Primer for Radiologists.
Chartrand, Gabriel; Cheng, Phillip M; Vorontsov, Eugene; Drozdzal, Michal; Turcotte, Simon; Pal, Christopher J; Kadoury, Samuel; Tang, An
2017-01-01
Deep learning is a class of machine learning methods that are gaining success and attracting interest in many domains, including computer vision, speech recognition, natural language processing, and playing games. Deep learning methods produce a mapping from raw inputs to desired outputs (eg, image classes). Unlike traditional machine learning methods, which require hand-engineered feature extraction from inputs, deep learning methods learn these features directly from data. With the advent of large datasets and increased computing power, these methods can produce models with exceptional performance. These models are multilayer artificial neural networks, loosely inspired by biologic neural systems. Weighted connections between nodes (neurons) in the network are iteratively adjusted based on example pairs of inputs and target outputs by back-propagating a corrective error signal through the network. For computer vision tasks, convolutional neural networks (CNNs) have proven to be effective. Recently, several clinical applications of CNNs have been proposed and studied in radiology for classification, detection, and segmentation tasks. This article reviews the key concepts of deep learning for clinical radiologists, discusses technical requirements, describes emerging applications in clinical radiology, and outlines limitations and future directions in this field. Radiologists should become familiar with the principles and potential applications of deep learning in medical imaging. © RSNA, 2017.
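The iterative adjustment of weighted connections by back-propagating an error signal corresponds to a single supervised training step; here is a minimal PyTorch example on toy data (our own variable names), not a clinical pipeline.

```python
import torch
import torch.nn as nn

# A small multilayer network: raw inputs -> hidden layer -> class scores.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 10)                 # a batch of example inputs
y = torch.randint(0, 3, (64,))          # their target classes

optimizer.zero_grad()
loss = loss_fn(model(x), y)             # compare outputs with target outputs
loss.backward()                         # back-propagate the corrective error signal
optimizer.step()                        # adjust the weighted connections
```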
Carré, Clément; Mas, André; Krouk, Gabriel
2017-01-01
Inferring transcriptional gene regulatory networks from transcriptomic datasets is a key challenge of systems biology, with potential impacts ranging from medicine to agronomy. There are several techniques used presently to experimentally assay transcription factor to target relationships, defining important information about real gene regulatory network connections. These techniques include classical ChIP-seq, yeast one-hybrid, or more recently, DAP-seq or target technologies. These techniques are usually used to validate algorithm predictions. Here, we developed a reverse engineering approach based on mathematical and computer simulation to evaluate the impact that this prior knowledge on gene regulatory networks may have on training machine learning algorithms. First, we developed a gene regulatory networks-simulating engine called FRANK (Fast Randomizing Algorithm for Network Knowledge) that is able to simulate large gene regulatory networks (containing 10^4 genes) with characteristics of gene regulatory networks observed in vivo. FRANK also generates stable or oscillatory gene expression directly produced by the simulated gene regulatory networks. The development of FRANK leads to important general conclusions concerning the design of large and stable gene regulatory networks harboring scale free properties (built ex nihilo). In combination with a supervised (prior-knowledge-accepting) support vector machine algorithm, we (i) address biologically oriented questions concerning our capacity to accurately reconstruct gene regulatory networks and in particular demonstrate that prior-knowledge structure is crucial for accurate learning, and (ii) draw conclusions to inform experimental design to perform learning able to solve gene regulatory networks in the future. By demonstrating that our predictions concerning the influence of the prior-knowledge structure on support vector machine learning capacity hold true on real data (Escherichia coli K14 network reconstruction using network and transcriptomic data), we show that the formalism used to build FRANK can to some extent be a reasonable model for gene regulatory networks in real cells.
Analysis and Visualization of Relations in eLearning
NASA Astrophysics Data System (ADS)
Dráždilová, Pavla; Obadi, Gamila; Slaninová, Kateřina; Martinovič, Jan; Snášel, Václav
The popularity of eLearning systems is growing rapidly; this growth is enabled by the continuing development of Internet and multimedia technologies. Web-based education has become widespread in the past few years, and various types of learning management systems facilitate the development of Web-based courses. Users of these courses form social networks through the different activities they perform. This chapter focuses on searching for latent social networks in eLearning system data. These data consist of students' activity records in which latent ties among actors are embedded. The social network studied in this chapter is represented by groups of students who have similar contacts and interact in similar social circles. Different methods of data clustering analysis can be applied to these groups, and the findings show the existence of latent ties among the group members. The second part of this chapter focuses on social network visualization. A graphical representation of a social network can describe its structure very efficiently and can enable social network analysts to determine the network's degree of connectivity. Analysts can easily identify individuals with few or many relationships, as well as the number of independent groups in a given network. When applied to the field of eLearning, data visualization simplifies the monitoring of the study activities of individuals or groups, as well as the planning of educational curricula, the evaluation of study processes, etc.
Lessons Learned and Lessons To Be Learned: An Overview of Innovative Network Learning Environments.
ERIC Educational Resources Information Center
Jacobson, Michael J.; Jacobson, Phoebe Chen
This paper provides an overview of five innovative projects involving network learning technologies in the United States: (1) the MicroObservatory Internet Telescope is a collection of small, high-quality, and low-maintenance telescopes operated by the Harvard-Smithsonian Center for Astrophysics (Massachusetts), which may be used remotely via the…
ERIC Educational Resources Information Center
Zhou, Xiaokang; Chen, Jian; Wu, Bo; Jin, Qun
2014-01-01
With the high development of social networks, collaborations in a socialized web-based learning environment has become increasing important, which means people can learn through interactions and collaborations in communities across social networks. In this study, in order to support the enhanced collaborative learning, two important factors, user…
Social Networks-Based Adaptive Pairing Strategy for Cooperative Learning
ERIC Educational Resources Information Center
Chuang, Po-Jen; Chiang, Ming-Chao; Yang, Chu-Sing; Tsai, Chun-Wei
2012-01-01
In this paper, we propose a grouping strategy to enhance the learning and testing results of students, called Pairing Strategy (PS). The proposed method stems from the need of interactivity and the desire of cooperation in cooperative learning. Based on the social networks of students, PS provides members of the groups to learn from or mimic…
Paradoxes of Social Networking in a Structured Web 2.0 Language Learning Community
ERIC Educational Resources Information Center
Loiseau, Mathieu; Zourou, Katerina
2012-01-01
This paper critically inquires into social networking as a set of mechanisms and associated practices developed in a structured Web 2.0 language learning community. This type of community can be roughly described as learning spaces featuring (more or less) structured language learning resources displaying at least some notions of language learning…
ERIC Educational Resources Information Center
Jarvela, Sanna; Naykki, Piia; Laru, Jari; Luokkanen, Tiina
2007-01-01
In our recent research we have explored possibilities to scaffold collaborative learning in higher education with wireless networks and mobile tools. The pedagogical ideas are grounded on concepts of collaborative learning, including the socially shared origin of cognition, as well as self-regulated learning theory. This paper presents our three…
Effects of the ISIS Recommender System for Navigation Support in Self-Organised Learning Networks
ERIC Educational Resources Information Center
Drachsler, Hendrik; Hummel, Hans; van den Berg, Bert; Eshuis, Jannes; Waterink, Wim; Nadolski, Rob; Berlanga, Adriana; Boers, Nanda; Koper, Rob
2009-01-01
The need to support users of the Internet with the selection of information is becoming more important. Learners in complex, self-organising Learning Networks have similar problems and need guidance to find and select most suitable learning activities, in order to attain their lifelong learning goals in the most efficient way. Several research…
Creating and Sustaining Inquiry Spaces for Teacher Learning and System Transformation
ERIC Educational Resources Information Center
Kaser, Linda; Halbert, Judy
2014-01-01
Over a 15-year period, one Western Canadian province, British Columbia, has been exploring the potential of inquiry learning networks to deepen teacher professional learning and to influence the system as a whole. During this time, we have learned a great deal about shifting practice through inquiry networks. In this article, we provide a…
ERIC Educational Resources Information Center
Mirman, Daniel; Estes, Katharine Graf; Magnuson, James S.
2010-01-01
Statistical learning mechanisms play an important role in theories of language acquisition and processing. Recurrent neural network models have provided important insights into how these mechanisms might operate. We examined whether such networks capture two key findings in human statistical learning. In Simulation 1, a simple recurrent network…
Enhancing Formal E-Learning with Edutainment on Social Networks
ERIC Educational Resources Information Center
Labus, A.; Despotovic-Zrakic, M.; Radenkovic, B.; Bogdanovic, Z.; Radenkovic, M.
2015-01-01
This paper reports on the investigation of the possibilities of enhancing the formal e-learning process by harnessing the potential of informal game-based learning on social networks. The goal of the research is to improve the outcomes of the formal learning process through the design and implementation of an educational game on a social network…
Using Social Networks to Enhance Teaching and Learning Experiences in Higher Learning Institutions
ERIC Educational Resources Information Center
Balakrishnan, Vimala
2014-01-01
The paper first explores the factors that affect the use of social networks to enhance teaching and learning experiences among students and lecturers, using structured questionnaires prepared based on the Push-Pull-Mooring framework. A total of 455 students and lecturers from higher learning institutions in Malaysia participated in this study.…
Assessment of Learning in Digital Interactive Social Networks: A Learning Analytics Approach
ERIC Educational Resources Information Center
Wilson, Mark; Gochyyev, Perman; Scalise, Kathleen
2016-01-01
This paper summarizes initial field-test results from data analytics used in the work of the Assessment and Teaching of 21st Century Skills (ATC21S) project, on the "ICT Literacy--Learning in digital networks" learning progression. This project, sponsored by Cisco, Intel and Microsoft, aims to help educators around the world enable…
ERIC Educational Resources Information Center
Liu, Chen-Chung; Hong, Yi-Ching
2007-01-01
Although computers and network technology have been widely utilised to assist students learn, few technical supports have been developed to help hearing-impaired students learn in Taiwan. A significant challenge for teachers is to provide after-class learning care and assistance to hearing-impaired students that sustain their motivation to…
ERIC Educational Resources Information Center
Liu, C.-C.; Tao, S.-Y.; Nee, J.-N.
2008-01-01
The internet has been widely used to promote collaborative learning among students. However, students do not always have access to the system, leading to doubt in the interaction among the students, and reducing the effectiveness of collaborative learning, since the web-based collaborative learning environment relies entirely on the availability…
NASA Astrophysics Data System (ADS)
Kondo, Shuhei; Shibata, Tadashi; Ohmi, Tadahiro
1995-02-01
We have investigated the learning performance of the hardware backpropagation (HBP) algorithm, a hardware-oriented learning algorithm developed for the self-learning architecture of neural networks constructed using neuron MOS (metal-oxide-semiconductor) transistors. The solution to finding a mirror symmetry axis in a 4×4 binary pixel array was tested by computer simulation based on the HBP algorithm. Despite the inherent restrictions imposed on the hardware-learning algorithm, HBP exhibits equivalent learning performance to that of the original backpropagation (BP) algorithm when all the pertinent parameters are optimized. Very importantly, we have found that HBP has a superior generalization capability over BP; namely, HBP exhibits higher performance in solving problems that the network has not yet learnt.
Chinese lexical networks: The structure, function and formation
NASA Astrophysics Data System (ADS)
Li, Jianyu; Zhou, Jie; Luo, Xiaoyue; Yang, Zhanxin
2012-11-01
In this paper Chinese phrases are modeled using complex network theory. We analyze statistical properties of the networks and find that phrase networks display several important features: not only the small-world property and a power-law degree distribution, but also hierarchical structure and disassortative mixing. These statistical traits reflect the global organization of Chinese phrases. The origin and formation of such traits are analyzed from a macroscopic perspective of Chinese culture and philosophy. It is interesting to find that Chinese culture and philosophy may shape the formation and structure of Chinese phrases. To uncover the structural design principles of the networks, network motif patterns are studied. Motifs are shown to serve as basic building blocks of the whole phrase networks; in particular, triad 38 (the feed-forward loop) plays an especially important role in forming most of the phrases and other motifs. This distinct structure may not only keep the networks stable and robust, but also be helpful for information processing. The results of the paper give some insight into Chinese language learning and acquisition: they strengthen the idea that learning the phrases helps in understanding Chinese culture, while, conversely, understanding Chinese culture and philosophy helps in learning Chinese phrases. The hub nodes in the networks show a close relationship with Chinese culture and philosophy. Learning or teaching the hub characters, hub-linked phrases, and phrases that are related in meaning according to motif features should be very useful and important for Chinese learning and acquisition.
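In the standard triad-census nomenclature, the feed-forward loop discussed above is the transitive triad labelled '030T'; the toy networkx example below (our own miniature directed graph, not the authors' phrase corpus) shows how such motifs and hub nodes can be counted.

```python
import networkx as nx

# Toy directed "phrase" network; edges point from earlier to later characters.
G = nx.DiGraph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("B", "D")])

census = nx.triadic_census(G)
print("feed-forward loops (triad 030T):", census["030T"])
print("out-degree hubs:", sorted(G.out_degree, key=lambda kv: -kv[1])[:2])
```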
Dynamic reconfiguration of human brain functional networks through neurofeedback.
Haller, Sven; Kopel, Rotem; Jhooti, Permi; Haas, Tanja; Scharnowski, Frank; Lovblad, Karl-Olof; Scheffler, Klaus; Van De Ville, Dimitri
2013-11-01
Recent fMRI studies demonstrated that functional connectivity is altered following cognitive tasks (e.g., learning) or due to various neurological disorders. We tested whether real-time fMRI-based neurofeedback can be a tool to voluntarily reconfigure brain network interactions. To disentangle learning-related from regulation-related effects, we first trained participants to voluntarily regulate activity in the auditory cortex (training phase) and subsequently asked participants to exert learned voluntary self-regulation in the absence of feedback (transfer phase without learning). Using independent component analysis (ICA), we found network reconfigurations (increases in functional network connectivity) during the neurofeedback training phase between the auditory target region and (1) the auditory pathway; (2) visual regions related to visual feedback processing; (3) insula related to introspection and self-regulation and (4) working memory and high-level visual attention areas related to cognitive effort. Interestingly, the auditory target region was identified as the hub of the reconfigured functional networks without a-priori assumptions. During the transfer phase, we again found specific functional connectivity reconfiguration between auditory and attention network confirming the specific effect of self-regulation on functional connectivity. Functional connectivity to working memory related networks was no longer altered consistent with the absent demand on working memory. We demonstrate that neurofeedback learning is mediated by widespread changes in functional connectivity. In contrast, applying learned self-regulation involves more limited and specific network changes in an auditory setup intended as a model for tinnitus. Hence, neurofeedback training might be used to promote recovery from neurological disorders that are linked to abnormal patterns of brain connectivity. Copyright © 2013 Elsevier Inc. All rights reserved.
Network mechanisms of intentional learning
Hampshire, Adam; Hellyer, Peter J.; Parkin, Beth; Hiebert, Nole; MacDonald, Penny; Owen, Adrian M.; Leech, Robert; Rowe, James
2016-01-01
The ability to learn new tasks rapidly is a prominent characteristic of human behaviour. This ability relies on flexible cognitive systems that adapt in order to encode temporary programs for processing non-automated tasks. Previous functional imaging studies have revealed distinct roles for the lateral frontal cortices (LFCs) and the ventral striatum in intentional learning processes. However, the human LFCs are complex; they house multiple distinct sub-regions, each of which co-activates with a different functional network. It remains unclear how these LFC networks differ in their functions and how they coordinate with each other, and the ventral striatum, to support intentional learning. Here, we apply a suite of fMRI connectivity methods to determine how LFC networks activate and interact at different stages of two novel tasks, in which arbitrary stimulus-response rules are learnt either from explicit instruction or by trial-and-error. We report that the networks activate en masse and in synchrony when novel rules are being learnt from instruction. However, these networks are not homogeneous in their functions; instead, the directed connectivities between them vary asymmetrically across the learning timecourse and they disengage from the task sequentially along a rostro-caudal axis. Furthermore, when negative feedback indicates the need to switch to alternative stimulus–response rules, there is additional input to the LFC networks from the ventral striatum. These results support the hypotheses that LFC networks interact as a hierarchical system during intentional learning and that signals from the ventral striatum have a driving influence on this system when the internal program for processing the task is updated. PMID:26658925
Orhan, A Emin; Ma, Wei Ji
2017-07-26
Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey's learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task specific operations.
Bunger, Alicia C; Lengnick-Hall, Rebecca
Collaborative learning models were designed to support quality improvements, such as innovation implementation by promoting communication within organizational teams. Yet the effect of collaborative learning approaches on organizational team communication during implementation is untested. The aim of this study was to explore change in communication patterns within teams from children's mental health organizations during a year-long learning collaborative focused on implementing a new treatment. We adopt a social network perspective to examine intraorganizational communication within each team and assess change in (a) the frequency of communication among team members, (b) communication across organizational hierarchies, and (c) the overall structure of team communication networks. A pretest-posttest design compared communication among 135 participants from 21 organizational teams at the start and end of a learning collaborative. At both time points, participants were asked to list the members of their team and rate the frequency of communication with each along a 7-point Likert scale. Several individual, pair-wise, and team level communication network metrics were calculated and compared over time. At the individual level, participants reported communicating with more team members by the end of the learning collaborative. Cross-hierarchical communication did not change. At the team level, these changes manifested differently depending on team size. In large teams, communication frequency increased, and networks grew denser and slightly less centralized. In small teams, communication frequency declined, growing more sparse and centralized. Results suggest that team communication patterns change minimally but evolve differently depending on size. Learning collaboratives may be more helpful for enhancing communication among larger teams; thus, managers might consider selecting and sending larger staff teams to learning collaboratives. This study highlights key future research directions that can disentangle the relationship between learning collaboratives and team networks.
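The team-level density and centralization referred to above are standard social-network measures; the small illustrative computation below uses a made-up team edge list, with Freeman degree centralization computed by hand rather than the study's survey-based networks.

```python
import networkx as nx

# Toy communication network for one team: an edge means the pair reported talking.
team = nx.Graph([(1, 2), (1, 3), (1, 4), (2, 3), (4, 5)])

density = nx.density(team)                      # share of possible ties that are present
degrees = dict(team.degree())
n, d_max = team.number_of_nodes(), max(degrees.values())
# Freeman degree centralization: 1.0 for a star network, 0.0 for fully even communication.
centralization = sum(d_max - d for d in degrees.values()) / ((n - 1) * (n - 2))

print(f"density={density:.2f}, centralization={centralization:.2f}")
```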
Thaut, Michael H; Peterson, David A; McIntosh, Gerald C
2005-12-01
In a series of experiments, we have begun to investigate the effect of music as a mnemonic device on learning and memory and the underlying plasticity of oscillatory neural networks. We used verbal learning and memory tests (standardized word lists, AVLT) in conjunction with electroencephalographic analysis to determine differences between verbal learning in either a spoken or musical (verbal materials as song lyrics) modality. In healthy adults, learning in both the spoken and music condition was associated with significant increases in oscillatory synchrony across all frequency bands. A significant difference between the spoken and music condition emerged in the cortical topography of the learning-related synchronization. When using EEG measures as predictors during learning for subsequent successful memory recall, significantly increased coherence (phase-locked synchronization) within and between oscillatory brain networks emerged for music in alpha and gamma bands. In a similar study with multiple sclerosis patients, superior learning and memory was shown in the music condition when controlled for word order recall, and subjects were instructed to sing back the word lists. Also, the music condition was associated with a significant power increase in the low-alpha band in bilateral frontal networks, indicating increased neuronal synchronization. Musical learning may access compensatory pathways for memory functions during compromised PFC functions associated with learning and recall. Music learning may also confer a neurophysiological advantage through the stronger synchronization of the neuronal cell assemblies underlying verbal learning and memory. Collectively our data provide evidence that melodic-rhythmic templates as temporal structures in music may drive internal rhythm formation in recurrent cortical networks involved in learning and memory.
Teacher Networks Companion Piece
ERIC Educational Resources Information Center
Hopkins, Ami Patel; Rulli, Carolyn; Schiff, Daniel; Fradera, Marina
2015-01-01
Network building vitally impacts career development, but in few professions does it impact daily practice more than in teaching. Teacher networks, known as professional learning communities, communities of practice, peer learning circles, virtual professional communities, as well as other names, play a unique and powerful role in education. In…
Network Learning for Educational Change. Professional Learning
ERIC Educational Resources Information Center
Veugelers, Wiel, Ed.; O'Hair, Mary John, Ed.
2005-01-01
School-university networks are becoming an important method to enhance educational renewal and student achievement. Networks go beyond tensions of top-down versus bottom-up, school development and professional development of individuals, theory and practice, and formal and informal organizational structures. The theoretical base of networking…
Learning Universal Computations with Spikes
Thalmeier, Dominik; Uhlmann, Marvin; Kappen, Hilbert J.; Memmesheimer, Raoul-Martin
2016-01-01
Providing the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g. for locomotion. Many such computations require previous building of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. Firstly, we derive constraints under which classes of spiking neural networks lend themselves to substrates of powerful general purpose computing. The networks contain dendritic or synaptic nonlinearities and have a constrained connectivity. We then combine such networks with learning rules for outputs or recurrent connections. We show that this allows the networks to learn even difficult benchmark tasks such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them. PMID:27309381
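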
Bladder cancer treatment response assessment using deep learning in CT with transfer learning
NASA Astrophysics Data System (ADS)
Cha, Kenny H.; Hadjiiski, Lubomir M.; Chan, Heang-Ping; Samala, Ravi K.; Cohan, Richard H.; Caoili, Elaine M.; Paramagul, Chintana; Alva, Ajjai; Weizer, Alon Z.
2017-03-01
We are developing a CAD system for bladder cancer treatment response assessment in CT. We compared the performance of the deep-learning convolution neural network (DL-CNN) using different network sizes, and with and without transfer learning using natural scene images or regions of interest (ROIs) inside and outside the bladder. The DL-CNN was trained to identify responders (T0 disease) and non-responders to chemotherapy. ROIs were extracted from segmented lesions in pre- and post-treatment scans of a patient and paired to generate hybrid pre-post-treatment paired ROIs. The 87 lesions from 82 patients generated 104 temporal lesion pairs and 6,700 pre-post-treatment paired ROIs. Two-fold cross-validation and receiver operating characteristic analysis were performed and the area under the curve (AUC) was calculated for the DL-CNN estimates. The AUCs for prediction of T0 disease after treatment were 0.77+/-0.08 and 0.75+/-0.08, respectively, for the two partitions using DL-CNN without transfer learning and a small network, and were 0.74+/-0.07 and 0.74+/-0.08 with a large network. The AUCs were 0.73+/-0.08 and 0.62+/-0.08 with transfer learning using a small network pre-trained with bladder ROIs. The AUC values were 0.77+/-0.08 and 0.73+/-0.07 using the large network pre-trained with the same bladder ROIs. With transfer learning using the large network pretrained with the Canadian Institute for Advanced Research (CIFAR-10) data set, the AUCs were 0.72+/-0.06 and 0.64+/-0.09, respectively, for the two partitions. None of the differences in the methods reached statistical significance. Our study demonstrated the feasibility of using DL-CNN for the estimation of treatment response in CT. Transfer learning did not improve the treatment response estimation. The DL-CNN performed better when transfer learning with bladder images was used instead of natural scene images.
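Transfer learning of the kind compared here generally means initializing a network from weights learned on another dataset and fine-tuning it on the target task; the PyTorch sketch below uses a toy CNN of our own and a hypothetical checkpoint path, not the study's DL-CNN or data.

```python
import torch
import torch.nn as nn

# Toy CNN: a convolutional feature extractor followed by a small classifier head.
features = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
)
classifier = nn.Linear(16, 2)            # responder vs non-responder

# Transfer learning: load weights pretrained on a source task (path is hypothetical),
# freeze the feature extractor, and fine-tune only a new classification head.
# features.load_state_dict(torch.load("pretrained_features.pt"))
for p in features.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 1, 64, 64)            # stand-in pre/post-treatment ROI pairs
y = torch.randint(0, 2, (8,))
logits = classifier(features(x).flatten(1))
loss = loss_fn(logits, y)
loss.backward()
optimizer.step()
```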
Prespeech motor learning in a neural network using reinforcement.
Warlaumont, Anne S; Westermann, Gert; Buder, Eugene H; Oller, D Kimbrough
2013-02-01
Vocal motor development in infancy provides a crucial foundation for language development. Some significant early accomplishments include learning to control the process of phonation (the production of sound at the larynx) and learning to produce the sounds of one's language. Previous work has shown that social reinforcement shapes the kinds of vocalizations infants produce. We present a neural network model that provides an account of how vocal learning may be guided by reinforcement. The model consists of a self-organizing map that outputs to muscles of a realistic vocalization synthesizer. Vocalizations are spontaneously produced by the network. If a vocalization meets certain acoustic criteria, it is reinforced, and the weights are updated to make similar muscle activations increasingly likely to recur. We ran simulations of the model under various reinforcement criteria and tested the types of vocalizations it produced after learning in the different conditions. When reinforcement was contingent on the production of phonated (i.e. voiced) sounds, the network's post-learning productions were almost always phonated, whereas when reinforcement was not contingent on phonation, the network's post-learning productions were almost always not phonated. When reinforcement was contingent on both phonation and proximity to English vowels as opposed to Korean vowels, the model's post-learning productions were more likely to resemble the English vowels and vice versa. Copyright © 2012 Elsevier Ltd. All rights reserved.
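A reinforcement-gated map update captures the gist of the model described above; the sketch below is a loose abstraction of our own in which the vocal synthesizer and the acoustic analysis are collapsed into a `reinforced_fn` flag, so it should be read as a schematic rather than the published model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_muscles = 25, 6
W = rng.uniform(size=(n_units, n_muscles))      # each unit stores a muscle-activation pattern

def babble_and_learn(reinforced_fn, lr=0.2, noise=0.1):
    """Spontaneously produce a vocalization; strengthen it only if reinforced."""
    unit = rng.integers(n_units)                         # spontaneously active unit
    activation = np.clip(W[unit] + noise * rng.normal(size=n_muscles), 0, 1)
    if reinforced_fn(activation):                        # e.g. "was the sound phonated?"
        # Move the unit's stored pattern toward the rewarded muscle activation,
        # making similar productions increasingly likely to recur.
        W[unit] += lr * (activation - W[unit])
    return activation

# Toy criterion standing in for the acoustic analysis: reward strong "voicing".
produced = [babble_and_learn(lambda a: a[0] > 0.6) for _ in range(500)]
```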
Yin, Weiwei; Garimalla, Swetha; Moreno, Alberto; Galinski, Mary R; Styczynski, Mark P
2015-08-28
There are increasing efforts to bring high-throughput systems biology techniques to bear on complex animal model systems, often with a goal of learning about underlying regulatory network structures (e.g., gene regulatory networks). However, complex animal model systems typically have significant limitations on cohort sizes, number of samples, and the ability to perform follow-up and validation experiments. These constraints are particularly problematic for many current network learning approaches, which require large numbers of samples and may predict many more regulatory relationships than actually exist. Here, we test the idea that by leveraging the accuracy and efficiency of classifiers, we can construct high-quality networks that capture important interactions between variables in datasets with few samples. We start from a previously-developed tree-like Bayesian classifier and generalize its network learning approach to allow for arbitrary depth and complexity of tree-like networks. Using four diverse sample networks, we demonstrate that this approach performs consistently better at low sample sizes than the Sparse Candidate Algorithm, a representative approach for comparison because it is known to generate Bayesian networks with high positive predictive value. We develop and demonstrate a resampling-based approach to enable the identification of a viable root for the learned tree-like network, important for cases where the root of a network is not known a priori. We also develop and demonstrate an integrated resampling-based approach to the reduction of variable space for the learning of the network. Finally, we demonstrate the utility of this approach via the analysis of a transcriptional dataset of a malaria challenge in a non-human primate model system, Macaca mulatta, suggesting the potential to capture indicators of the earliest stages of cellular differentiation during leukopoiesis. We demonstrate that by starting from effective and efficient approaches for creating classifiers, we can identify interesting tree-like network structures with significant ability to capture the relationships in the training data. This approach represents a promising strategy for inferring networks with high positive predictive value under the constraint of small numbers of samples, meeting a need that will only continue to grow as more high-throughput studies are applied to complex model systems.
ERIC Educational Resources Information Center
Turcato, Carolina; Barin-Cruz, Luciano; Pedrozo, Eugenio Avila
2012-01-01
Purpose: This study aims to investigate how an organic cotton production network learns to maintain its hybrid network and its sustainability in the face of internal and external pressures. Design/methodology/approach: A qualitative case study was conducted in Justa Trama, a Brazilian-based organic cotton production network formed by six members…
ERIC Educational Resources Information Center
Lin, Xiaofan; Hu, Xiaoyong; Hu, Qintai; Liu, Zhichun
2016-01-01
Analysing the structure of a social network can help us understand the key factors influencing interaction and collaboration in a virtual learning community (VLC). Here, we describe the mechanisms used in social network analysis (SNA) to analyse the social network structure of a VLC for teachers and discuss the relationship between face-to-face…
Active learning of cortical connectivity from two-photon imaging data.
Bertrán, Martín A; Martínez, Natalia L; Wang, Ye; Dunson, David; Sapiro, Guillermo; Ringach, Dario
2018-01-01
Understanding how groups of neurons interact within a network is a fundamental question in systems neuroscience. Instead of passively observing the ongoing activity of a network, we can typically perturb its activity, either by external sensory stimulation or directly via techniques such as two-photon optogenetics. A natural question is how to use such perturbations to identify the connectivity of the network efficiently. Here we introduce a method to infer sparse connectivity graphs from in-vivo, two-photon imaging of population activity in response to external stimuli. A novel aspect of the work is the introduction of a recommended distribution, incrementally learned from the data, to optimally refine the inferred network. Unlike existing system identification techniques, this "active learning" method automatically focuses its attention on key undiscovered areas of the network, instead of targeting global uncertainty indicators like parameter variance. We show how active learning leads to faster inference while, at the same time, providing confidence intervals for the network parameters. We present simulations on artificial small-world networks to validate the method and then apply it to real data. Analysis of the frequency of recovered motifs shows that cortical networks are consistent with a small-world topology model.
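The following toy Python sketch illustrates the general active-learning idea of choosing which neuron to perturb next, under assumptions that depart from the paper: a linear-Gaussian response model and a simple per-target uncertainty score stand in for the authors' recommended distribution. It only shows why targeted perturbation can beat random probing, not the actual estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
W_true = (rng.random((n, n)) < 0.1) * rng.normal(0, 1, (n, n))   # sparse ground truth

def observe(target, noise=0.5):
    """Perturb one neuron and read out a noisy population response."""
    return W_true[:, target] + noise * rng.standard_normal(n)

def run(active=True, budget=300):
    est = np.zeros((n, n))
    counts = np.zeros(n)
    score = np.full(n, np.inf)               # crude per-target uncertainty score
    for _ in range(budget):
        target = int(np.argmax(score)) if active else int(rng.integers(n))
        y = observe(target)
        counts[target] += 1
        est[:, target] += (y - est[:, target]) / counts[target]   # running mean
        score[target] = 1.0 / counts[target]                      # shrinks with samples
    return np.mean((est - W_true) ** 2)

print("active probing MSE:", round(run(active=True), 4))
print("random probing MSE:", round(run(active=False), 4))
```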
Deinterlacing using modular neural network
NASA Astrophysics Data System (ADS)
Woo, Dong H.; Eom, Il K.; Kim, Yoo S.
2004-05-01
Deinterlacing is the process of converting an interlaced scan into a progressive one. While many previous algorithms based on weighted sums cause blurring in edge regions, deinterlacing with a neural network can reduce this blurring by recovering high-frequency components through learning, and it is robust to noise. In the proposed algorithm, the input image is divided into edge and smooth regions, and one neural network is assigned to each region. Each network therefore learns only similar patterns, which makes learning more effective and estimation more accurate. Even within each region, however, there are various patterns, such as long edges and texture within the edge region. To solve this problem, a modular neural network is proposed in which two modules are combined at the output node: one handles the low-frequency features of the local input area and the other the high-frequency features. With this structure, each module can learn different patterns while compensating for the drawbacks of its counterpart, so the network adapts effectively to the various patterns within each region. In simulation, the proposed algorithm shows better performance than conventional deinterlacing methods and a single neural network.
Li, Xin; Gray, Kathleen; Verspoor, Karin; Barnett, Stephen
2017-01-01
Online social networks (OSN) enable health professionals to learn informally, for example by sharing medical knowledge, or discussing practice management challenges and clinical issues. Understanding the learning context in OSN is necessary to get a complete picture of the learning process, in order to better support this type of learning. This study proposes critical contextual factors for understanding the learning context in OSN for health professionals, and demonstrates how these contextual factors can be used to analyse the learning context in a designated online learning environment for health professionals.
Axelsson, Robert; Angelstam, Per; Myhrman, Lennart; Sädbom, Stefan; Ivarsson, Milis; Elbakidze, Marine; Andersson, Kenneth; Cupa, Petr; Diry, Christian; Doyon, Frederic; Drotz, Marcus K; Hjorth, Arne; Hermansson, Jan Olof; Kullberg, Thomas; Lickers, F Henry; McTaggart, Johanna; Olsson, Anders; Pautov, Yurij; Svensson, Lennart; Törnblom, Johan
2013-03-01
Implementing policies about sustainable landscapes and rural development necessitates social learning about states and trends of sustainability indicators, norms that define sustainability, and adaptive multi-level governance. We evaluate the extent to which social learning at multiple governance levels for sustainable landscapes occurs in 18 local development initiatives in the network of Sustainable Bergslagen in Sweden. We mapped activities over time, and interviewed key actors in the network about social learning. While activities resulted in exchange of experiences and some local solutions, a major challenge was to secure systematic social learning and make new knowledge explicit at multiple levels. None of the development initiatives used a systematic approach to secure social learning, and sustainability assessments were not made systematically. We discuss how social learning can be improved, and how a learning network of development initiatives could be realized.
A novel time series link prediction method: Learning automata approach
NASA Astrophysics Data System (ADS)
Moradabadi, Behnaz; Meybodi, Mohammad Reza
2017-09-01
Link prediction is a central social network problem that uses the network structure to predict future links. Common link prediction approaches predict hidden links from a static graph representation, where a snapshot of the network is analyzed to find hidden or future links. For example, similarity-metric-based link prediction is a traditional approach that computes a similarity metric for each non-connected pair, sorts the pairs by their similarity scores, and labels the pairs with the highest scores as future links. Because people's activities in social networks are dynamic and uncertain, and the structure of the network changes over time, deterministic graphs may not be appropriate for modeling and analyzing social networks. In the time-series link prediction problem, the time series of link occurrences is used to predict future links. In this paper, we propose a new time series link prediction method based on learning automata. In the proposed algorithm, each link to be predicted is assigned one learning automaton, and each automaton tries to predict the existence or non-existence of its link. To predict the link occurrence at time T, there is a chain consisting of stages 1 through T - 1, and the learning automaton passes through these stages to learn the existence or non-existence of the corresponding link. Our preliminary link prediction experiments with co-authorship and email networks provided satisfactory results when time series of link occurrences were considered.
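As a hedged illustration of the basic building block, the sketch below implements a standard two-action linear reward-inaction (L_RI) automaton for a single link, rewarded whenever its chosen action matches the observed occurrence at each stage. The learning rate and the toy occurrence history are invented, and the full method's per-link chains over stages 1..T-1 are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def lri_predict(series, a=0.2):
    """One two-action linear reward-inaction (L_RI) automaton for a single link.

    `series` is the link's occurrence history (1 = present, 0 = absent).
    The automaton is rewarded whenever its chosen action matches the observed
    occurrence; its final action probabilities give the prediction for time T.
    """
    p = np.array([0.5, 0.5])                  # P(action=absent), P(action=present)
    for observed in series:
        action = rng.choice(2, p=p)
        if action == observed:                # reward: reinforce the chosen action
            p[action] += a * (1.0 - p[action])
            p[1 - action] *= (1.0 - a)
    return int(p[1] > 0.5), p

# A link that appears in most recent stages should be predicted as present.
history = [0, 1, 1, 0, 1, 1, 1, 1]
pred, probs = lri_predict(history)
print("predicted link at time T:", pred, "action probabilities:", probs.round(3))
```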
NASA Astrophysics Data System (ADS)
Marshall, Jonathan A.
1992-12-01
A simple self-organizing neural network model, called an EXIN network, that learns to process sensory information in a context-sensitive manner, is described. EXIN networks develop efficient representation structures for higher-level visual tasks such as segmentation, grouping, transparency, depth perception, and size perception. Exposure to a perceptual environment during a developmental period serves to configure the network to perform appropriate organization of sensory data. A new anti-Hebbian inhibitory learning rule permits superposition of multiple simultaneous neural activations (multiple winners), while maintaining contextual consistency constraints, instead of forcing winner-take-all pattern classifications. The activations can represent multiple patterns simultaneously and can represent uncertainty. The network performs parallel parsing, credit attribution, and simultaneous constraint satisfaction. EXIN networks can learn to represent multiple oriented edges even where they intersect and can learn to represent multiple transparently overlaid surfaces defined by stereo or motion cues. In the case of stereo transparency, the inhibitory learning implements a uniqueness constraint while permitting coactivation of cells representing multiple disparities at the same image location. Thus two or more disparities can be active simultaneously without interference. This behavior is analogous to that of Prazdny's stereo vision algorithm, with the bonus that each binocular point is assigned a unique disparity. In a large implementation, such a network would also be able to represent effectively the disparities of a cloud of points at random depths, like human observers and unlike Prazdny's method.
Dynamic functional connectivity shapes individual differences in associative learning.
Fatima, Zainab; Kovacevic, Natasha; Misic, Bratislav; McIntosh, Anthony Randal
2016-11-01
Current neuroscientific research has shown that the brain reconfigures its functional interactions at multiple timescales. Here, we sought to link transient changes in functional brain networks to individual differences in behavioral and cognitive performance by using an active learning paradigm. Participants learned associations between pairs of unrelated visual stimuli by using feedback. Interindividual behavioral variability was quantified with a learning rate measure. By using a multivariate statistical framework (partial least squares), we identified patterns of network organization across multiple temporal scales (within a trial, millisecond; across a learning session, minute) and linked these to the rate of change in behavioral performance (fast and slow). Results indicated that posterior network connectivity was present early in the trial for fast, and later in the trial for slow performers. In contrast, connectivity in an associative memory network (frontal, striatal, and medial temporal regions) occurred later in the trial for fast, and earlier for slow performers. Time-dependent changes in the posterior network were correlated with visual/spatial scores obtained from independent neuropsychological assessments, with fast learners performing better on visual/spatial subtests. No relationship was found between functional connectivity dynamics in the memory network and visual/spatial test scores indicative of cognitive skill. By using a comprehensive set of measures (behavioral, cognitive, and neurophysiological), we report that individual variations in learning-related performance change are supported by differences in cognitive ability and time-sensitive connectivity in functional neural networks. Hum Brain Mapp 37:3911-3928, 2016. © 2016 Wiley Periodicals, Inc.
Ostrowski, M; Paulevé, L; Schaub, T; Siegel, A; Guziolowski, C
2016-11-01
Boolean networks (and more general logic models) are useful frameworks to study signal transduction across multiple pathways. Logic models can be learned from a prior knowledge network structure and multiplex phosphoproteomics data. However, most efficient and scalable training methods focus on the comparison of two time-points and assume that the system has reached an early steady state. In this paper, we generalize such a learning procedure to take into account the time series traces of phosphoproteomics data in order to discriminate Boolean networks according to their transient dynamics. To that end, we identify a necessary condition that must be satisfied by the dynamics of a Boolean network to be consistent with a discretized time series trace. Based on this condition, we use Answer Set Programming to compute an over-approximation of the set of Boolean networks which fit best with experimental data and provide the corresponding encodings. Combined with model-checking approaches, we end up with a global learning algorithm. Our approach is able to learn logic models with a true positive rate higher than 78% in two case studies of mammalian signaling networks; for a larger case study, our method provides optimal answers after 7 min of computation. We quantified the gain in prediction precision of our method compared to learning approaches based on static data. Finally, as an application, our method proposes erroneous time-points in the time series data with respect to the optimal learned logic models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
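A minimal sketch of the consistency idea (not the paper's ASP encoding or its asynchronous semantics) might check whether a candidate Boolean network can reproduce each consecutive pair of discretized observations under synchronous updates; the three-node rules and the trace below are invented for illustration.

```python
# A toy Boolean network over three nodes; each entry maps the current state
# to the node's next value. These rules and the trace below are illustrative
# only, not taken from the paper's case studies.
rules = {
    "A": lambda s: s["C"],
    "B": lambda s: s["A"] and not s["C"],
    "C": lambda s: not s["B"],
}

def sync_step(state):
    return {node: bool(f(state)) for node, f in rules.items()}

def consistent_with_trace(trace, max_steps=10):
    """Necessary-condition style check: every consecutive pair of discretized
    observations must be reachable by iterating the synchronous update."""
    for current, nxt in zip(trace, trace[1:]):
        state, reached = dict(current), False
        for _ in range(max_steps):
            state = sync_step(state)
            if state == nxt:
                reached = True
                break
        if not reached:
            return False
    return True

trace = [
    {"A": True, "B": False, "C": True},
    {"A": True, "B": False, "C": True},   # a fixed point of the toy rules
]
print("trace consistent:", consistent_with_trace(trace))
```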
Spiking neuron network Helmholtz machine.
Sountsov, Pavel; Miller, Paul
2015-01-01
An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, the complete description of how one of those algorithms (or a novel algorithm) can be implemented in the brain is currently incomplete. There have been many proposed solutions that address how neurons can perform optimal inference but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz Machine. The Helmholtz Machine is amenable to neural implementation as the algorithm it uses to learn its parameters, called the wake-sleep algorithm, uses a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data sets without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the duration of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule.
Networked Learning: Design Considerations for Online Instructors
ERIC Educational Resources Information Center
Czerkawski, Betul C.
2016-01-01
The considerable increase in web-based knowledge networks in the past two decades is strongly influencing learning environments. Learning entails information retrieval, use, communication, and production, and is strongly enriched by socially mediated discussions, debates, and collaborative activities. It is becoming critical for educators to…
Wu, Ting-Ting
2014-06-01
Virtual communities provide numerous resources, immediate feedback, and information sharing, enabling people to rapidly acquire information and knowledge and supporting diverse applications that facilitate interpersonal interactions, communication, and sharing. Moreover, incorporating highly mobile and convenient devices into practice-based courses can be advantageous in learning situations. Therefore, in this study, a tablet PC and Google+ were introduced into a health education practice course to assess satisfaction with the learning module and conditions, and to analyze the sequence and frequency of learning behaviors during the social-network-based learning process. According to the analytical results, social networks can improve interaction among peers and between educators and students, particularly when these networks are used to search for data, post articles, engage in discussions, and communicate. In addition, most nursing students and nursing educators expressed a positive attitude and satisfaction toward these innovative teaching methods, and looked forward to continuing the use of this learning approach. Copyright © 2014 Elsevier Ltd. All rights reserved.
Impact of censoring on learning Bayesian networks in survival modelling.
Stajduhar, Ivan; Dalbelo-Basić, Bojana; Bogunović, Nikola
2009-11-01
Bayesian networks are commonly used for presenting uncertainty and covariate interactions in an easily interpretable way. Because of their efficient inference and ability to represent causal relationships, they are an excellent choice for medical decision support systems in diagnosis, treatment, and prognosis. Although good procedures for learning Bayesian networks from data have been defined, their performance in learning from censored survival data has not been widely studied. In this paper, we explore how to use these procedures to learn about possible interactions between prognostic factors and their influence on the variate of interest. We study how censoring affects the probability of learning correct Bayesian network structures. Additionally, we analyse the potential usefulness of the learnt models for predicting the time-independent probability of an event of interest. We analysed the influence of censoring with a simulation on synthetic data sampled from randomly generated Bayesian networks. We used two well-known methods for learning Bayesian networks from data: a constraint-based method and a score-based method. We compared the performance of each method under different levels of censoring to those of the naive Bayes classifier and the proportional hazards model. We did additional experiments on several datasets from real-world medical domains. The machine-learning methods treated censored cases in the data as event-free. We report and compare results for several commonly used model evaluation metrics. On average, the proportional hazards method outperformed other methods in most censoring setups. As part of the simulation study, we also analysed structural similarities of the learnt networks. Heavy censoring, as opposed to no censoring, produces up to a 5% surplus and up to 10% missing total arcs. It also produces up to 50% missing arcs that should originally be connected to the variate of interest. Presented methods for learning Bayesian networks from data can be used to learn from censored survival data in the presence of light censoring (up to 20%) by treating censored cases as event-free. Given intermediate or heavy censoring, the learnt models become tuned to the majority class and would thus require a different approach.
Multi-layer network utilizing rewarded spike time dependent plasticity to learn a foraging task
2017-01-01
Neural networks with a single plastic layer employing reward modulated spike time dependent plasticity (STDP) are capable of learning simple foraging tasks. Here we demonstrate advanced pattern discrimination and continuous learning in a network of spiking neurons with multiple plastic layers. The network utilized both reward modulated and non-reward modulated STDP and implemented multiple mechanisms for homeostatic regulation of synaptic efficacy, including heterosynaptic plasticity, gain control, output balancing, activity normalization of rewarded STDP and hard limits on synaptic strength. We found that addition of a hidden layer of neurons employing non-rewarded STDP created neurons that responded to the specific combinations of inputs and thus performed basic classification of the input patterns. When combined with a following layer of neurons implementing rewarded STDP, the network was able to learn, despite the absence of labeled training data, discrimination between rewarding patterns and the patterns designated as punishing. Synaptic noise allowed for trial-and-error learning that helped to identify the goal-oriented strategies which were effective in task solving. The study predicts a critical set of properties of the spiking neuronal network with STDP that was sufficient to solve a complex foraging task involving pattern classification and decision making. PMID:28961245
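The sketch below illustrates the generic reward-modulated STDP mechanism the paper builds on: pair-based STDP increments accumulate in an eligibility trace and are written into the weights only when a scalar reward arrives, with a hard weight limit. Time constants, amplitudes, and the random spike timings are illustrative placeholders, not the paper's multi-layer network or its homeostatic mechanisms.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal reward-modulated STDP on one synapse population: pair-based STDP
# increments accumulate in an eligibility trace, and a sparse scalar reward
# gates whether the trace is written into the weights.
n_syn    = 50
w        = rng.uniform(0.2, 0.4, n_syn)
elig     = np.zeros(n_syn)
tau_elig = 20.0        # eligibility-trace time constant (time steps)
a_plus   = 0.01        # potentiation amplitude for pre-before-post pairs
a_minus  = 0.012       # depression amplitude for post-before-pre pairs
w_max    = 1.0         # hard limit on synaptic strength

def stdp_increment(dt):
    """Pair-based STDP kernel: dt = t_post - t_pre (in time steps)."""
    return a_plus * np.exp(-dt / 10.0) if dt >= 0 else -a_minus * np.exp(dt / 10.0)

for step in range(1000):
    # fake spike-timing differences for each synapse on this step
    dts = rng.integers(-30, 30, n_syn)
    elig += np.array([stdp_increment(dt) for dt in dts])
    elig *= np.exp(-1.0 / tau_elig)                     # trace decay
    reward = 1.0 if rng.random() < 0.1 else 0.0         # sparse, delayed reward
    w = np.clip(w + reward * elig, 0.0, w_max)          # reward gates the update

print("mean weight after learning:", w.mean().round(3))
```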
Bridging Cognitive And Neural Aspects Of Classroom Learning
NASA Astrophysics Data System (ADS)
Posner, Michael I.
2009-11-01
A major achievement of the first twenty years of neuroimaging is to reveal the brain networks that underlie fundamental aspects of attention, memory and expertise. We examine some principles underlying the activation of these networks. These networks represent key constraints for the design of teaching. Individual differences in these networks reflect a combination of genes and experiences. While acquiring expertise is easier for some than for others, the importance of effort in its acquisition is a basic principle. Networks are strengthened through exercise, but maintaining interest that produces sustained attention is key to making exercises successful. The state of the brain prior to learning may also represent an important constraint on successful learning, and some interventions designed to investigate the role of attention state in learning are discussed. Teaching remains a creative act between instructor and student, but an understanding of brain mechanisms might improve opportunity for success for both participants.
Investigating Student Communities with Network Analysis of Interactions in a Physics Learning Center
NASA Astrophysics Data System (ADS)
Brewe, Eric; Kramer, Laird; O'Brien, George
2009-11-01
We describe our initial efforts at implementing social network analysis to visualize and quantify student interactions in Florida International University's Physics Learning Center. Developing a sense of community among students is one of the three pillars of an overall reform effort to increase participation in physics, and the sciences more broadly, at FIU. Our implementation of a research and learning community, embedded within a course reform effort, has led to increased recruitment and retention of physics majors. Finn and Rock [1997] link the academic and social integration of students to increased rates of retention. To identify these interactions, we have initiated an investigation that utilizes social network analysis to identify primary community participants. Community interactions are then characterized through the network's density and connectivity, shedding light on learning communities and participation. Preliminary results, further research questions, and future directions utilizing social network analysis are presented.
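For readers unfamiliar with the measures mentioned, the short sketch below computes density, connected components, and degree centrality for a hypothetical interaction edge list using networkx; the names are placeholders and the data are not from the Physics Learning Center.

```python
import networkx as nx

# Hypothetical student-interaction edge list (who worked with whom in the
# learning center); the names are placeholders, not FIU data.
interactions = [
    ("Ana", "Ben"), ("Ana", "Cal"), ("Ben", "Cal"),
    ("Cal", "Dee"), ("Dee", "Eli"), ("Fay", "Gus"),
]
G = nx.Graph(interactions)

print("density:", round(nx.density(G), 3))
print("connected components:", nx.number_connected_components(G))
# Degree centrality highlights likely "primary community participants".
central = sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])
print("most central:", central[:3])
```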
A review of active learning approaches to experimental design for uncovering biological networks
2017-01-01
Various types of biological knowledge describe networks of interactions among elementary entities. For example, transcriptional regulatory networks consist of interactions among proteins and genes. Current knowledge about the exact structure of such networks is highly incomplete, and laboratory experiments that manipulate the entities involved are conducted to test hypotheses about these networks. In recent years, various automated approaches to experiment selection have been proposed. Many of these approaches can be characterized as active machine learning algorithms. Active learning is an iterative process in which a model is learned from data, hypotheses are generated from the model to propose informative experiments, and the experiments yield new data that is used to update the model. This review describes the various models, experiment selection strategies, validation techniques, and successful applications described in the literature; highlights common themes and notable distinctions among methods; and identifies likely directions of future research and open problems in the area. PMID:28570593
Modi, Mehrab N; Dhawale, Ashesh K; Bhalla, Upinder S
2014-01-01
Animals can learn causal relationships between pairs of stimuli separated in time and this ability depends on the hippocampus. Such learning is believed to emerge from alterations in network connectivity, but large-scale connectivity is difficult to measure directly, especially during learning. Here, we show that area CA1 cells converge to time-locked firing sequences that bridge the two stimuli paired during training, and this phenomenon is coupled to a reorganization of network correlations. Using two-photon calcium imaging of mouse hippocampal neurons we find that co-time-tuned neurons exhibit enhanced spontaneous activity correlations that increase just prior to learning. While time-tuned cells are not spatially organized, spontaneously correlated cells do fall into distinct spatial clusters that change as a result of learning. We propose that the spatial re-organization of correlation clusters reflects global network connectivity changes that are responsible for the emergence of the sequentially-timed activity of cell-groups underlying the learned behavior. DOI: http://dx.doi.org/10.7554/eLife.01982.001 PMID:24668171
NASA Astrophysics Data System (ADS)
Calvin Frans Mariel, Wahyu; Mariyah, Siti; Pramana, Setia
2018-03-01
Deep learning is a new generation of machine learning techniques that essentially imitate the structure and function of the human brain. It is a development of deeper Artificial Neural Networks (ANN) that use more than one hidden layer. A deep learning neural network has a great ability to recognize patterns in various data types such as images, audio, and text. In this paper, the authors measure that ability by applying the algorithm to text classification. The classification task here considers the sentiment expressed in a text, which is also called sentiment analysis. Using several combinations of text preprocessing and feature extraction techniques, we compare the modelling results of the deep learning neural network with two other commonly used algorithms, Naïve Bayes and the Support Vector Machine (SVM). The comparison uses Indonesian text data with balanced and unbalanced sentiment composition. Based on the experimental simulation, the deep learning neural network clearly outperforms Naïve Bayes and the SVM and offers a better F1 score, and the feature extraction technique that most improves the modelling results is the bigram.
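A small, hedged sketch of this kind of comparison using scikit-learn is shown below; the tiny English toy corpus, the bigram feature choice, and the two-hidden-layer network are stand-ins for illustration only and say nothing about the paper's Indonesian data or results.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Tiny invented corpus standing in for the labelled sentiment data.
texts = ["great product, works well", "really love it", "excellent and fast",
         "terrible quality", "does not work at all", "very disappointing",
         "happy with the purchase", "awful support, never again"]
labels = [1, 1, 1, 0, 0, 0, 1, 0]

# Bigram bag-of-words features, mirroring the bigram feature choice above.
X = CountVectorizer(ngram_range=(2, 2)).fit_transform(texts)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

models = {
    "Naive Bayes": MultinomialNB(),
    "SVM": LinearSVC(),
    "Neural network (2 hidden layers)": MLPClassifier(hidden_layer_sizes=(32, 16),
                                                      max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "F1:", round(f1_score(y_te, model.predict(X_te)), 2))
```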
SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.
Zenke, Friedemann; Ganguli, Surya
2018-06-01
A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng
2017-04-10
This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.
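The sketch below illustrates the central data transformation on synthetic data: stacking segment speed series into a time-space matrix, slicing it into image-like windows, and fitting a small convolutional model (here a generic tf.keras network, not the paper's architecture). Segment counts, window length, and the signal itself are invented.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(4)

# Synthetic speeds for 20 road segments over 200 time steps (km/h); this toy
# matrix stands in for the real time-space matrix built from sensor data.
n_seg, n_t = 20, 200
speeds = 60 + 10 * np.sin(np.linspace(0, 8 * np.pi, n_t)) + rng.normal(0, 3, (n_seg, n_t))

# Slide a window over time: each "image" is (segments x window) of past speeds,
# the target is the network-wide speed vector at the next time step.
window = 12
X = np.stack([speeds[:, t:t + window] for t in range(n_t - window - 1)])[..., None]
y = np.stack([speeds[:, t + window] for t in range(n_t - window - 1)])

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, (3, 3), activation="relu",
                           input_shape=(n_seg, window, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(n_seg),          # predict every segment's next speed
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
print("training MSE on toy data:", float(model.evaluate(X, y, verbose=0)))
```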
Machine Learning Topological Invariants with Neural Networks
NASA Astrophysics Data System (ADS)
Zhang, Pengfei; Shen, Huitao; Zhai, Hui
2018-02-01
In this Letter we train neural networks with supervised learning to distinguish different topological phases in the context of topological band insulators. After training with Hamiltonians of one-dimensional insulators with chiral symmetry, the neural network can predict their topological winding numbers with nearly 100% accuracy, even for Hamiltonians with larger winding numbers that are not included in the training data. These results show, remarkably, that the neural network can capture the global and nonlinear topological features of quantum phases from local inputs. By opening up the neural network, we confirm that the network does learn the discrete version of the winding number formula. We also make a couple of remarks regarding the role of the symmetry and the opposite effect of regularization techniques when applying machine learning to physical systems.
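The quantity the network is reported to learn can be computed directly; the sketch below evaluates the discretized winding-number formula for an SSH-like chiral Hamiltonian whose off-diagonal block is assumed to be h(k) = t1 + t2·exp(ik) (a standard textbook choice, not necessarily the Hamiltonians used in the Letter).

```python
import numpy as np

def winding_number(t1, t2, n_k=2001):
    """Discretized winding-number formula for the off-diagonal block
    h(k) = t1 + t2 * exp(i k) of an SSH-like chiral Hamiltonian."""
    k = np.linspace(-np.pi, np.pi, n_k)
    h = t1 + t2 * np.exp(1j * k)
    dtheta = np.diff(np.unwrap(np.angle(h)))   # phase increments along the loop
    return int(np.rint(dtheta.sum() / (2 * np.pi)))

# Trivial phase (|t1| > |t2|) vs. topological phase (|t1| < |t2|).
print(winding_number(t1=1.0, t2=0.5))   # -> 0
print(winding_number(t1=0.5, t2=1.0))   # -> 1
```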
Local Area Networks and the Learning Lab of the Future.
ERIC Educational Resources Information Center
Ebersole, Dennis C.
1987-01-01
Considers educational applications of local area computer networks and discusses industry standards for design established by the International Standards Organization (ISO) and Institute of Electrical and Electronic Engineers (IEEE). A futuristic view of a learning laboratory using a local area network is presented. (Author/LRW)
Networked Teaching and Learning.
ERIC Educational Resources Information Center
Benson, Chris, Ed.
2002-01-01
This theme issue on networked teaching and learning contains 11 articles written by teachers of English and language arts in Bread Loaf's primarily rural, teacher networks. Most of these narratives describe how teachers have taught writing and literature using online exchanges or teleconferencing involving students in different locations and grade…
Experiences of Pioneers Facilitating Teacher Networks for Professional Development
ERIC Educational Resources Information Center
Hanraets, Irene; Hulsebosch, Joitske; de Laat, Maarten
2011-01-01
This study presents an exploration into facilitation practices of teacher professional development networks. Stimulating networked learning amongst teachers is a powerful way of creating an informal practice-based learning space driven by teacher needs. As such, it presents an additional channel (besides more formal traditional professional…
Gerraty, Raphael T.; Davidow, Juliet Y.; Wimmer, G. Elliott; Kahn, Itamar
2014-01-01
An important aspect of adaptive learning is the ability to flexibly use past experiences to guide new decisions. When facing a new decision, some people automatically leverage previously learned associations, while others do not. This variability in transfer of learning across individuals has been demonstrated repeatedly and has important implications for understanding adaptive behavior, yet the source of these individual differences remains poorly understood. In particular, it is unknown why such variability in transfer emerges even among homogeneous groups of young healthy participants who do not vary on other learning-related measures. Here we hypothesized that individual differences in the transfer of learning could be related to relatively stable differences in intrinsic brain connectivity, which could constrain how individuals learn. To test this, we obtained a behavioral measure of memory-based transfer outside of the scanner and on a separate day acquired resting-state functional MRI images in 42 participants. We then analyzed connectivity across independent component analysis-derived brain networks during rest, and tested whether intrinsic connectivity in learning-related networks was associated with transfer. We found that individual differences in transfer were related to intrinsic connectivity between the hippocampus and the ventromedial prefrontal cortex, and between these regions and large-scale functional brain networks. Together, the findings demonstrate a novel role for intrinsic brain dynamics in flexible learning-guided behavior, both within a set of functionally specific regions known to be important for learning, as well as between these regions and the default and frontoparietal networks, which are thought to serve more general cognitive functions. PMID:25143610
Korostil, Michele; Remington, Gary; McIntosh, Anthony Randal
2016-01-01
Understanding how practice mediates the transition of brain-behavior networks between early and later stages of learning is constrained by the common approach to analysis of fMRI data. Prior imaging studies have mostly relied on a single scan, and parametric, task-related analyses. Our experiment incorporates a multisession fMRI lexicon-learning experiment with multivariate, whole-brain analysis to further knowledge of the distributed networks supporting practice-related learning in schizophrenia (SZ). Participants with SZ were compared with healthy control (HC) participants as they learned a novel lexicon during two fMRI scans over a several-day period. All participants were trained to equal task proficiency prior to scanning. Behavioral-Partial Least Squares, a multivariate analytic approach, was used to analyze the imaging data. Permutation testing was used to determine statistical significance and bootstrap resampling to determine the reliability of the findings. With practice, HC participants transitioned to a brain-accuracy network incorporating dorsostriatal regions in late-learning stages. The SZ participants did not transition to this pattern despite comparable behavioral results. Instead, successful learners with SZ were differentiated primarily on the basis of greater engagement of perceptual and perceptual-integration brain regions. There is a different spatiotemporal unfolding of brain-learning relationships in SZ. In SZ, given the same amount of practice, the movement from networks suggestive of effortful learning toward a subcortically driven, procedural one differs from that of HC participants. Learning performance in SZ is driven by varying levels of engagement in perceptual regions, which suggests perception itself is impaired and may impact downstream, "higher level" cognition.
NASA Astrophysics Data System (ADS)
Mizusaki, Beatriz E. P.; Agnes, Everton J.; Erichsen, Rubem; Brunnet, Leonardo G.
2017-08-01
The plastic character of brain synapses is considered to be one of the foundations for the formation of memories. There are numerous kinds of such plasticity currently described in the literature, but their role in the development of information pathways in neural networks with recurrent architectures is still not completely clear. In this paper we study the role of an activity-based process, called pre-synaptic dependent homeostatic scaling, in the organization of networks that yield precisely timed spiking patterns. It encodes spatio-temporal information in the synaptic weights as it associates a learned input with a specific response. We introduce a correlation measure to evaluate the precision of the spiking patterns and explore the effects of different inhibitory interactions and learning parameters. We find that long learning periods are important for improving the network's learning capacity, and we discuss this ability in the presence of distinct inhibitory currents.
Stochastic competitive learning in complex networks.
Silva, Thiago Christiano; Zhao, Liang
2012-03-01
Competitive learning is an important machine learning approach which is widely employed in artificial neural networks. In this paper, we present a rigorous definition of a new type of competitive learning scheme realized on large-scale networks. The model consists of several particles walking within the network and competing with each other to occupy as many nodes as possible, while attempting to reject intruder particles. Each particle's walking rule is composed of a stochastic combination of random and preferential movements. The model has been applied to solve community detection and data clustering problems. Computer simulations reveal that the proposed technique achieves high precision in community and cluster detection, as well as low computational complexity. Moreover, we have developed an efficient method for estimating the most likely number of clusters by using an evaluator index that monitors the information generated by the competition process itself. We hope this paper will provide an alternative avenue for the study of competitive learning.
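A stripped-down sketch of the walking rule is given below: each particle takes a preferential step toward nodes it already dominates with probability lambda and a random step otherwise, and visits reinforce ownership. The planted-partition graph, lambda, and the domination bookkeeping are illustrative simplifications, not the authors' full particle-competition model.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)

# Two planted communities; two competing particles start one in each.
G = nx.planted_partition_graph(2, 15, 0.7, 0.05, seed=1)
nodes = list(G.nodes)
domination = np.ones((2, len(nodes)))          # how strongly each particle holds each node
position = [0, 29]                             # one particle per community (by construction)
lam = 0.6                                      # weight of the preferential movement

for _ in range(5000):
    for p in range(2):
        nbrs = list(G.neighbors(position[p]))
        if rng.random() < lam:                 # preferential: favour nodes this particle dominates
            weights = domination[p, nbrs]
            nxt = rng.choice(nbrs, p=weights / weights.sum())
        else:                                  # random exploration
            nxt = rng.choice(nbrs)
        position[p] = int(nxt)
        domination[p, nxt] += 1.0              # visiting reinforces ownership

labels = domination.argmax(axis=0)             # node -> winning particle
print("detected community sizes:", np.bincount(labels))
```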
Learning and optimization with cascaded VLSI neural network building-block chips
NASA Technical Reports Server (NTRS)
Duong, T.; Eberhardt, S. P.; Tran, M.; Daud, T.; Thakoor, A. P.
1992-01-01
To demonstrate the versatility of the building-block approach, two neural network applications were implemented on cascaded analog VLSI chips. Weights were implemented using 7-b multiplying digital-to-analog converter (MDAC) synapse circuits, with 31 x 32 and 32 x 32 synapses per chip. A novel learning algorithm compatible with analog VLSI was applied to the two-input parity problem. The algorithm combines dynamically evolving architecture with limited gradient-descent backpropagation for efficient and versatile supervised learning. To implement the learning algorithm in hardware, synapse circuits were paralleled for additional quantization levels. The hardware-in-the-loop learning system allocated 2-5 hidden neurons for parity problems. Also, a 7 x 7 assignment problem was mapped onto a cascaded 64-neuron fully connected feedback network. In 100 randomly selected problems, the network found optimal or good solutions in most cases, with settling times in the range of 7-100 microseconds.
Deep learning of orthographic representations in baboons.
Hannagan, Thomas; Ziegler, Johannes C; Dufau, Stéphane; Fagot, Joël; Grainger, Jonathan
2014-01-01
What is the origin of our ability to learn orthographic knowledge? We use deep convolutional networks to emulate the primate's ventral visual stream and explore the recent finding that baboons can be trained to discriminate English words from nonwords. The networks were exposed to the exact same sequence of stimuli and reinforcement signals as the baboons in the experiment, and learned to map real visual inputs (pixels) of letter strings onto binary word/nonword responses. We show that the networks' highest levels of representations were indeed sensitive to letter combinations as postulated in our previous research. The model also captured the key empirical findings, such as generalization to novel words, along with some intriguing inter-individual differences. The present work shows the merits of deep learning networks that can simulate the whole processing chain all the way from the visual input to the response while allowing researchers to analyze the complex representations that emerge during the learning process.
Distributed synaptic weights in a LIF neural network and learning rules
NASA Astrophysics Data System (ADS)
Perthame, Benoît; Salort, Delphine; Wainrib, Gilles
2017-09-01
Leaky integrate-and-fire (LIF) models are mean-field limits, with a large number of neurons, used to describe neural networks. We consider inhomogeneous networks structured by a connectivity parameter (the strength of the synaptic weights), which has the effect of processing the input current with different intensities. We first study the properties of the network activity as they depend on the distribution of synaptic weights, and in particular its discrimination capacity. Then, we consider simple learning rules and determine the synaptic weight distributions they generate. We outline the role of noise as a selection principle and the capacity to memorize a learned signal.
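To make the setting concrete, the following toy simulation (Euler integration, illustrative parameters) drives a population of independent LIF neurons with a common input scaled by synaptic weights drawn from a lognormal distribution, showing how the weight distribution translates into different firing behaviour; it is not the mean-field analysis of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# A population of independent LIF neurons driven by the same input current,
# each scaled by its own synaptic weight drawn from a distribution.
n, T, dt  = 200, 1.0, 1e-3          # neurons, simulated seconds, time step
tau, v_th = 0.02, 1.0               # membrane time constant (s), firing threshold
weights   = rng.lognormal(mean=0.0, sigma=0.5, size=n)
I_ext     = 1.2                     # common external drive

v = np.zeros(n)
spike_count = np.zeros(n)
for _ in range(int(T / dt)):
    dv = (-v + weights * I_ext) / tau
    v += dt * dv
    fired = v >= v_th
    spike_count[fired] += 1
    v[fired] = 0.0                  # reset after a spike

rate = spike_count / T
print("fraction of neurons that fire:", float((rate > 0).mean()))
print("weight/rate correlation:", round(float(np.corrcoef(weights, rate)[0, 1]), 2))
```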
A proposal of fuzzy connective with learning function and its application to fuzzy retrieval system
NASA Technical Reports Server (NTRS)
Hayashi, Isao; Naito, Eiichi; Ozawa, Jun; Wakami, Noboru
1993-01-01
A new fuzzy connective, and a network structure constructed from fuzzy connectives, are proposed to overcome a drawback of conventional fuzzy retrieval systems. The network represents a retrieval query, and the fuzzy connectives in the network have a learning function that adjusts their parameters based on data from a database and feedback from the user. Fuzzy retrieval systems employing this network are also constructed. Users can retrieve results even with a query whose attributes do not exist in the database schema and, thanks to the learning function, can obtain satisfactory results for a variety of ways of thinking.
Privacy-preserving backpropagation neural network learning.
Chen, Tingting; Zhong, Sheng
2009-10-01
With the development of distributed computing environments, many learning problems now have to deal with distributed input data. To enhance cooperation in learning, it is important to address the privacy concerns of each data holder by extending the notion of privacy preservation to the original learning algorithms. In this paper, we focus on preserving privacy in an important learning model, multilayer neural networks. We present a privacy-preserving two-party distributed backpropagation algorithm which allows a neural network to be trained without requiring either party to reveal her data to the other. We provide a complete correctness and security analysis of our algorithms. The effectiveness of our algorithms is verified by experiments on various real-world data sets.
An architecture for designing fuzzy logic controllers using neural networks
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1991-01-01
Described here is an architecture for designing fuzzy controllers through a hierarchical process of control rule acquisition and by using special classes of neural network learning techniques. A new method for learning to refine a fuzzy logic controller is introduced. A reinforcement learning technique is used in conjunction with a multi-layer neural network model of a fuzzy controller. The model learns by updating its prediction of the plant's behavior and is related to Sutton's Temporal Difference (TD) method. The method proposed here has the advantage of using the control knowledge of an experienced operator and fine-tuning it through the process of learning. The approach is applied to a cart-pole balancing system.
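Since the learning component is described as related to Sutton's TD method, the sketch below shows a plain TD(0) prediction update on the classic five-state random walk; it illustrates only the prediction-update idea, not the fuzzy-controller architecture or the cart-pole application.

```python
import numpy as np

rng = np.random.default_rng(7)

# TD(0) prediction on a toy 5-state random walk (terminal at both ends,
# reward 1 only for exiting on the right). The true values of the inner
# states are 1/6 .. 5/6, which the TD update converges toward.
n_states, alpha, gamma = 5, 0.1, 1.0
V = np.zeros(n_states)

for _ in range(2000):
    s = 2                                        # start in the middle
    while True:
        s_next = s + (1 if rng.random() < 0.5 else -1)
        done = s_next < 0 or s_next >= n_states
        r = 1.0 if s_next >= n_states else 0.0
        target = r if done else r + gamma * V[s_next]
        V[s] += alpha * (target - V[s])          # the TD(0) prediction update
        if done:
            break
        s = s_next

print("learned state values:", V.round(2))       # approx [0.17 0.33 0.5 0.67 0.83]
```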
ERIC Educational Resources Information Center
O'Brien, Mark; Atkinson, Amanda; Burton, Diana; Campbell, Anne; Qualter, Anne; Varga-Atkins, Tunde
2009-01-01
This article has been produced from the work of a research project conducted in the context of a city-wide education service in the United Kingdom. This was the Liverpool Learning Networks Research Project, which began in July 2005. The researchers carried out semi-structured interviews with education practitioners--learning network…
Let's Face(book) It: Analyzing Interactions in Social Network Groups for Chemistry Learning
ERIC Educational Resources Information Center
Rap, Shelley; Blonder, Ron
2016-01-01
We examined how social network (SN) groups contribute to the learning of chemistry. The main goal was to determine whether chemistry learning could occur in the group discourse. The emphasis was on groups of students in the 11th and 12th grades who learn chemistry in preparation for their final external examination. A total of 1118 discourse…
ERIC Educational Resources Information Center
Drexler, Wendy
2010-01-01
The purpose of this design-based research case study was to apply a networked learning approach to a seventh grade science class at a public school in the southeastern United States. Students adapted Web applications to construct personal learning environments for in-depth scientific inquiry of poisonous and venomous life forms. API widgets were…
Analysis of the “naming game” with learning errors in communications
NASA Astrophysics Data System (ADS)
Lou, Yang; Chen, Guanrong
2015-07-01
The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. Through pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates following a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctly increase the memory required by each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without any strategy to eliminate learning errors, there is a threshold of the learning-error rate beyond which convergence is impaired. The new findings may help to better understand the role of learning errors in the naming game, as well as in human language development, from a network science perspective.
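A minimal naming-game simulation with a per-conversation error rate is sketched below to illustrate how errors inflate agents' word inventories; the graph model, error mechanism (the hearer receives a brand-new word), and parameters are simplifications and do not implement the NGLE model or its error-prevention strategy.

```python
import random
import networkx as nx

random.seed(8)

def naming_game(error_rate=0.0, n=100, steps=30000):
    G = nx.erdos_renyi_graph(n, 0.08, seed=8)
    vocab = [[] for _ in range(n)]                       # each agent's word list
    next_word = 0
    for _ in range(steps):
        speaker = random.randrange(n)
        nbrs = list(G.neighbors(speaker))
        if not nbrs:
            continue
        hearer = random.choice(nbrs)
        if not vocab[speaker]:                           # invent a new name
            vocab[speaker].append(next_word)
            next_word += 1
        word = random.choice(vocab[speaker])
        if random.random() < error_rate:                 # communication error:
            word = next_word                             # the hearer receives a
            next_word += 1                               # distorted, new word
        if word in vocab[hearer]:                        # success: both collapse
            vocab[speaker] = [word]
            vocab[hearer] = [word]
        else:                                            # failure: hearer learns it
            vocab[hearer].append(word)
    total_words = sum(len(v) for v in vocab)
    distinct = len({w for v in vocab for w in v})
    return total_words, distinct

for err in (0.0, 0.05):
    total, distinct = naming_game(error_rate=err)
    print(f"error rate {err}: total words held = {total}, distinct words = {distinct}")
```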
Analysis of the "naming game" with learning errors in communications.
Lou, Yang; Chen, Guanrong
2015-07-16
Naming game simulates the process of naming an objective by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors which impairs the convergence. The new findings may help to better understand the role of learning errors in naming game as well as in human language development from a network science perspective.
Informal Learning and Identity Formation in Online Social Networks
ERIC Educational Resources Information Center
Greenhow, Christine; Robelia, Beth
2009-01-01
All students today are increasingly expected to develop technological fluency, digital citizenship, and other twenty-first century competencies despite wide variability in the quality of learning opportunities schools provide. Social network sites (SNSs) available via the internet may provide promising contexts for learning to supplement…
What Drives Nurses' Blended e-Learning Continuance Intention?
ERIC Educational Resources Information Center
Cheng, Yung-Ming
2014-01-01
This study's purpose was to synthesize the user network (including subjective norm and network externality), task-technology fit (TTF), and expectation-confirmation model (ECM) to explain nurses' intention to continue using the blended electronic learning (e-learning) system within medical institutions. A total of 450 questionnaires were…
Stojanova, Daniela; Ceci, Michelangelo; Malerba, Donato; Dzeroski, Saso
2013-09-26
Ontologies and catalogs of gene functions, such as the Gene Ontology (GO) and MIPS-FUN, assume that functional classes are organized hierarchically, that is, general functions include more specific ones. This has recently motivated the development of several machine learning algorithms for gene function prediction that leverage this hierarchical organization, where instances may belong to multiple classes. In addition, it is possible to exploit relationships among examples, since it is plausible that related genes tend to share functional annotations. Although these relationships have been identified and extensively studied in the area of protein-protein interaction (PPI) networks, they have not received much attention in hierarchical and multi-class gene function prediction. Relations between genes introduce autocorrelation in functional annotations and violate the assumption that instances are independently and identically distributed (i.i.d.), which underlies most machine learning algorithms. Although the explicit consideration of these relations brings additional complexity to the learning process, we expect substantial benefits in predictive accuracy of learned classifiers. This article demonstrates the benefits (in terms of predictive accuracy) of considering autocorrelation in multi-class gene function prediction. We develop a tree-based algorithm for considering network autocorrelation in the setting of Hierarchical Multi-label Classification (HMC). We empirically evaluate the proposed algorithm, called NHMC (Network Hierarchical Multi-label Classification), on 12 yeast datasets using each of the MIPS-FUN and GO annotation schemes and exploiting 2 different PPI networks. The results clearly show that taking autocorrelation into account improves the predictive performance of the learned models for predicting gene function. Our newly developed method for HMC takes into account network information in the learning phase: When used for gene function prediction in the context of PPI networks, the explicit consideration of network autocorrelation increases the predictive performance of the learned models. Overall, we found that this holds for different gene features/descriptions, functional annotation schemes, and PPI networks: Best results are achieved when the PPI network is dense and contains a large proportion of function-relevant interactions.
The CIRTL Network: A Professional Development Network for Future STEM Faculty
NASA Astrophysics Data System (ADS)
Herbert, B. E.
2011-12-01
The Center for the Integration of Research, Teaching, and Learning (CIRTL) is an NSF Center for Learning and Teaching in higher education using the professional development of graduate students and post-doctoral scholars as the leverage point to develop a national STEM faculty committed to implementing and advancing effective teaching practices for diverse student audiences as part of successful professional careers. The goal of CIRTL is to improve the STEM learning of all students at every college and university, and thereby to increase the diversity in STEM fields and the STEM literacy of the nation. The CIRTL network seeks to support change at a number of levels in pursuit of its goals: individual, classroom, institutional, and national. To bring about change, which is never easy, the CIRTL network has developed a conceptual model or change model that is thought to support the program objectives. Three central concepts, Teaching-as-Research, Learning Communities, and Learning-through-Diversity, underlie the design of all CIRTL activities. STEM faculty use research methods to systematically and reflectively improve learning outcomes. This work is done within a community of shared learning and discovery, and explicitly recognizes that effective teaching capitalizes on the rich array of experiences, backgrounds, and skills among the students and instructors to enhance the learning of all. This model is being refined and tested through a networked-design experiment, where the model is tested in diverse settings. Established in fall 2006, the CIRTL Network comprises the University of Colorado at Boulder (CU), Howard University, Michigan State University, Texas A&M University, Vanderbilt University, and the University of Wisconsin-Madison. The diversity of these institutions is by design: private/public; large/moderate size; majority-/minority-serving; geographic location. This talk will describe the theoretical constructs and efficacy of Teaching-as-Research as a central design element of the CIRTL network model. Teaching-as-Research involves the deliberate, systematic, and reflective use of research methods to develop and implement teaching practices that advance the learning experiences and outcomes of students. CIRTL envisions three types of learning outcomes for CIRTL participants: CIRTL Fellow, CIRTL Practitioner, and CIRTL Scholar. These three tiered learning outcomes recognize the role of the CIRTL pillars in effective teaching (Fellow), scholarly teaching that builds on the CIRTL pillars to demonstrably improve learning and make the results public (Practitioner), and finally scholarship that advances teaching and learning under peer review (Scholar). CIRTL program outcomes conceived in this way permit anyone to enter the CIRTL Network learning community from a wide variety of disciplines, needs, and past experiences, and to achieve success as an instructor in diverse contexts.
Resting-state low-frequency fluctuations reflect individual differences in spoken language learning.
Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C M
2016-03-01
A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The "competition" (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest--ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success. Copyright © 2015 Elsevier Ltd. All rights reserved.
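A simplified version of the regional ALFF measure can be written down directly: sum the FFT amplitude of a demeaned time series within the 0.01-0.08 Hz band. The sketch below uses a synthetic signal and an assumed TR of 2 s, and omits the standardization steps used in practice.

```python
import numpy as np

rng = np.random.default_rng(9)

def alff(timeseries, tr, band=(0.01, 0.08)):
    """Amplitude of low-frequency fluctuation: summed FFT amplitude in `band`."""
    ts = timeseries - timeseries.mean()
    freqs = np.fft.rfftfreq(ts.size, d=tr)
    amp = np.abs(np.fft.rfft(ts)) / ts.size
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return amp[in_band].sum()

# Synthetic resting-state-like signal: a slow 0.04 Hz fluctuation plus noise,
# sampled every TR = 2 s for 300 volumes (values are illustrative only).
tr, n_vol = 2.0, 300
t = np.arange(n_vol) * tr
slow = np.sin(2 * np.pi * 0.04 * t)
noisy = 0.2 * slow + rng.standard_normal(n_vol)

print("ALFF, strong slow signal :", round(alff(slow + 0.2 * rng.standard_normal(n_vol), tr), 3))
print("ALFF, mostly noise       :", round(alff(noisy, tr), 3))
```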
Resting-state low-frequency fluctuations reflect individual differences in spoken language learning
Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping; Wong, Patrick C.M.
2016-01-01
A major challenge in language learning studies is to identify objective, pre-training predictors of success. Variation in the low-frequency fluctuations (LFFs) of spontaneous brain activity measured by resting-state functional magnetic resonance imaging (RS-fMRI) has been found to reflect individual differences in cognitive measures. In the present study, we aimed to investigate the extent to which initial spontaneous brain activity is related to individual differences in spoken language learning. We acquired RS-fMRI data and subsequently trained participants on a sound-to-word learning paradigm in which they learned to use foreign pitch patterns (from Mandarin Chinese) to signal word meaning. We performed amplitude of spontaneous low-frequency fluctuation (ALFF) analysis, graph theory-based analysis, and independent component analysis (ICA) to identify functional components of the LFFs in the resting-state. First, we examined the ALFF as a regional measure and showed that regional ALFFs in the left superior temporal gyrus were positively correlated with learning performance, whereas ALFFs in the default mode network (DMN) regions were negatively correlated with learning performance. Furthermore, the graph theory-based analysis indicated that the degree and local efficiency of the left superior temporal gyrus were positively correlated with learning performance. Finally, the default mode network and several task-positive resting-state networks (RSNs) were identified via the ICA. The “competition” (i.e., negative correlation) between the DMN and the dorsal attention network was negatively correlated with learning performance. Our results demonstrate that a) spontaneous brain activity can predict future language learning outcome without prior hypotheses (e.g., selection of regions of interest – ROIs) and b) both regional dynamics and network-level interactions in the resting brain can account for individual differences in future spoken language learning success. PMID:26866283
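As a concrete reference point for the regional ALFF measure used in the two records above, the sketch below computes a low-frequency amplitude for a single voxel time series. The sampling interval (TR), band limits, and synthetic signal are illustrative assumptions, not parameters taken from the study.

```python
# Minimal sketch of an ALFF-style computation for one voxel time series.
# The sampling interval (TR), band limits, and synthetic data are assumptions
# for illustration; they are not parameters reported in the abstract above.
import numpy as np

def alff(timeseries, tr=2.0, low=0.01, high=0.08):
    """Sum of spectral amplitudes of a detrended time series in [low, high] Hz."""
    x = np.asarray(timeseries, dtype=float)
    x = x - x.mean()                          # remove the mean (crude detrend)
    freqs = np.fft.rfftfreq(x.size, d=tr)     # frequency axis in Hz
    amp = np.abs(np.fft.rfft(x)) / x.size     # spectral amplitude
    band = (freqs >= low) & (freqs <= high)   # low-frequency band of interest
    return amp[band].sum()

# Synthetic example: 200 volumes at TR = 2 s with a slow 0.03 Hz component.
t = np.arange(200) * 2.0
signal = np.sin(2 * np.pi * 0.03 * t) + 0.5 * np.random.randn(t.size)
print(alff(signal))
```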
Synchronized Pair Configuration in Virtualization-Based Lab for Learning Computer Networks
ERIC Educational Resources Information Center
Kongcharoen, Chaknarin; Hwang, Wu-Yuin; Ghinea, Gheorghita
2017-01-01
More studies are concentrating on using virtualization-based labs to facilitate computer or network learning concepts. Some benefits are lower hardware costs and greater flexibility in reconfiguring computer and network environments. However, few studies have investigated effective mechanisms for using virtualization fully for collaboration.…
Viable Global Networked Learning. JSRI Occasional Paper No. 23. Latino Studies Series.
ERIC Educational Resources Information Center
Arias, Armando A., Jr.
This paper discusses an innovative paradigm for looking at computer mediated/networked teaching, learning, and research known as BESTNET (Binational English and Spanish Telecommunications Network). BESTNET is functionally defined as an international community of universities and institutions linked by common educational goals and processes,…
Neural network applications in telecommunications
NASA Technical Reports Server (NTRS)
Alspector, Joshua
1994-01-01
Neural network capabilities include automatic and organized handling of complex information, quick adaptation to continuously changing environments, nonlinear modeling, and parallel implementation. This viewgraph presentation presents Bellcore work on applications, learning chip computational function, learning system block diagram, neural network equalization, broadband access control, calling-card fraud detection, software reliability prediction, and conclusions.
Networked Environments that Create Hybrid Spaces for Learning Science
ERIC Educational Resources Information Center
Otrel-Cass, Kathrin; Khoo, Elaine; Cowie, Bronwen
2014-01-01
Networked learning environments that embed the essence of the Community of Inquiry (CoI) framework utilise pedagogies that encourage dialogic practices. This can be of significance for classroom teaching across all curriculum areas. In science education, networked environments are thought to support student investigations of scientific problems,…
The Role of the Australian Open Learning Information Network.
ERIC Educational Resources Information Center
Bishop, Robin; And Others
Three documents are presented which describe the Australian Open Learning Information Network (AOLIN)--a national, independent, and self-supporting network of educational researchers with a common interest in the use of information technology for open and distance education--and discuss two evaluative studies undertaken by the organization. The…
Learning, memory, and the role of neural network architecture.
Hermundstad, Ann M; Brown, Kevin S; Bassett, Danielle S; Carlson, Jean M
2011-06-01
The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.
Frame prediction using recurrent convolutional encoder with residual learning
NASA Astrophysics Data System (ADS)
Yue, Boxuan; Liang, Jun
2018-05-01
Predicting future frames of a video is difficult but urgently needed for autonomous driving. Conventional methods can only predict abstract trends in a region of interest, while the rise of deep learning makes frame prediction feasible. In this paper, we propose a novel recurrent convolutional encoder and deconvolutional decoder structure to predict frames. We introduce residual learning in the convolutional encoder to address gradient issues: residual learning turns gradient backpropagation into an identity mapping, preserving the full gradient information and overcoming the gradient problems of Recurrent Neural Networks (RNN) and Convolutional Neural Networks (CNN). Moreover, compared with the branches in CNNs and the gated structures in RNNs, residual learning significantly reduces training time. In the experiments, we train our networks on the UCF101 dataset and compare the predictions with several state-of-the-art methods. The results show that our networks predict frames quickly and efficiently. Furthermore, we apply the networks to driving video to verify their practicality.
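The residual-learning idea the abstract leans on can be summarized in a few lines: a block outputs its input plus a learned transformation, so the identity path carries information (and gradients) through unchanged. The small dense transformation, shapes, and random weights below are illustrative assumptions rather than the paper's recurrent convolutional architecture.

```python
# Minimal sketch of a residual block: output = x + F(x).
# The small two-layer transformation F and the dimensions are illustrative
# assumptions; the paper's recurrent convolutional encoder is not reproduced here.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Identity shortcut around a small learned transformation F(x)."""
    f = relu(x @ w1) @ w2      # F(x): two dense layers
    return x + f               # identity mapping carries x (and its gradient) through

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))            # a batch of 4 feature vectors
w1 = 0.1 * rng.standard_normal((16, 16))
w2 = 0.1 * rng.standard_normal((16, 16))
print(residual_block(x, w1, w2).shape)      # (4, 16): same shape as the input
```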
NASA Astrophysics Data System (ADS)
Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan
2018-02-01
Recently there has been continuously increasing interest in building computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles distinguishing them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, modeling self-organized neural networks with multiple forms of neural plasticity remains an important open challenge. The main difficulties lie in the interplay among different neural plasticity rules and in understanding how the structures and dynamics of neural networks shape computational performance. In this paper, we propose a novel approach to developing LSM models with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degrees of neuronal excitability are regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that an LSM with STDP+IP performs better than an LSM with a random SNN or with an SNN obtained by STDP alone. The noticeable improvement with the proposed method arises because the developed SNN model better reflects competition among neurons and because its learning and self-organizing mechanism encodes and processes relevant dynamic information more effectively. This result gives insight into the optimization of computational models of spiking neural networks with neural plasticity.
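For readers unfamiliar with the STDP rule that adapts the excitatory connectivity in this model, the sketch below shows the standard pair-based form: a presynaptic spike shortly before a postsynaptic spike potentiates the synapse, while the reverse order depresses it. Amplitudes, time constants, and weight bounds are illustrative assumptions, not values from the paper.

```python
# Pair-based STDP weight update: potentiate for pre-before-post spike pairs,
# depress for post-before-pre. Amplitudes, time constants, and bounds are
# illustrative assumptions, not parameters from the study above.
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a spike pair with dt = t_post - t_pre (in ms)."""
    if dt > 0:                                   # pre fires before post: potentiation
        return a_plus * np.exp(-dt / tau_plus)
    else:                                        # post fires before pre: depression
        return -a_minus * np.exp(dt / tau_minus)

w = 0.5
for dt in [15.0, 5.0, -5.0, -15.0]:              # a few example spike-time differences
    w = np.clip(w + stdp_dw(dt), 0.0, 1.0)       # keep the weight in [0, 1]
    print(f"dt = {dt:+.0f} ms -> w = {w:.4f}")
```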
Social Networks and Performance in Distributed Learning Communities
ERIC Educational Resources Information Center
Cadima, Rita; Ojeda, Jordi; Monguet, Josep M.
2012-01-01
Social networks play an essential role in learning environments as a key channel for knowledge sharing and students' support. In distributed learning communities, knowledge sharing does not occur as spontaneously as when a working group shares the same physical space; knowledge sharing depends even more on student informal connections. In this…
ERIC Educational Resources Information Center
Lee, Jun-Ki; Kwon, Yongju
2012-01-01
Fourteen science high school students participated in this study, which investigated neural-network plasticity associated with hypothesis-generating and hypothesis-understanding in learning. The students were divided into two groups and participated in either hypothesis-generating or hypothesis-understanding type learning programs, which were…
Immersive Educational Technology: Changing Families and Learning.
ERIC Educational Resources Information Center
Ehrich, Roger W.; McCreary, Faith
Since the popularization of networked computing that began in 1993, many excited educators have employed networked computers to improve motivation and learning in the classroom. Computers have also become a focal point for the improvement of instruction through the introduction of teaching methods that better support constructivist learning. While…
Connectionist Learning Procedures.
ERIC Educational Resources Information Center
Hinton, Geoffrey E.
A major goal of research on networks of neuron-like processing units is to discover efficient learning procedures that allow these networks to construct complex internal representations of their environment. The learning procedures must be capable of modifying the connection strengths in such a way that internal units which are not part of the…
Systemwide Implementation of Project-Based Learning: The Philadelphia Approach
ERIC Educational Resources Information Center
Schwalm, Jason; Tylek, Karen Smuck
2012-01-01
Citywide implementation of project-based learning highlights the benefits--and the challenges--of promoting exemplary practices across an entire out-of-school time (OST) network. In summer 2009, the City of Philadelphia and its intermediary, the Public Health Management Corporation (PHMC), introduced project-based learning to a network of more…
Get Networked and Spy Your Languages
ERIC Educational Resources Information Center
Rico, Mercedes; Ferreira, Paula; Dominguez, Eva M.; Coppens, Julian
2012-01-01
Our proposal describes ISPY, a multilateral European K2 language project based on the development of an Online Networking Platform for Language Learning (http://www.ispy-project.com/). Supported by the Lifelong Learning European Programme, the platform aims to help young adults across Europe, secondary and vocational school programs, learn a new…
Network Analysis of a Virtual Community of Learning of Economics Educators
ERIC Educational Resources Information Center
Fontainha, Elsa; Martins, Jorge Tiago; Vasconcelos, Ana Cristina
2015-01-01
Introduction: This paper aims at understanding virtual communities of learning in terms of dynamics, types of knowledge shared by participants, and network characteristics such as size, relationships, density, and centrality of participants. It looks at the relationships between these aspects and the evolution of communities of learning. It…
Late Departures from Paper-Based to Supported Networked Learning in South Africa: Lessons Learned
ERIC Educational Resources Information Center
Kok, Illasha; Beter, Petra; Esterhuizen, Hennie
2018-01-01
Fragmented connectivity in South Africa is the dominant barrier for digitising initiatives. New insights surfaced when a university-based nursing programme introduced tablets within a supportive network learning environment. A qualitative, explorative design investigated adult nurses' experiences of the realities when moving from paper-based…
Puzzles in modern biology. V. Why are genomes overwired?
Frank, Steven A
2017-01-01
Many factors affect eukaryotic gene expression. Transcription factors, histone codes, DNA folding, and noncoding RNA modulate expression. Those factors interact in large, broadly connected regulatory control networks. An engineer following classical principles of control theory would design a simpler regulatory network. Why are genomes overwired? Neutrality or enhanced robustness may lead to the accumulation of additional factors that complicate network architecture. Dynamics progresses like a ratchet. New factors get added. Genomes adapt to the additional complexity. The newly added factors can no longer be removed without significant loss of fitness. Alternatively, highly wired genomes may be more malleable. In large networks, most genomic variants tend to have a relatively small effect on gene expression and trait values. Many small effects lead to a smooth gradient, in which traits may change steadily with respect to underlying regulatory changes. A smooth gradient may provide a continuous path from a starting point up to the highest peak of performance. A potential path of increasing performance promotes adaptability and learning. Genomes gain by the inductive process of natural selection, a trial and error learning algorithm that discovers general solutions for adapting to environmental challenge. Similarly, deeply and densely connected computational networks gain by various inductive trial and error learning procedures, in which the networks learn to reduce the errors in sequential trials. Overwiring alters the geometry of induction by smoothing the gradient along the inductive pathways of improving performance. Those overwiring benefits for induction apply to both natural biological networks and artificial deep learning networks.
Modification Of Learning Rate With Lvq Model Improvement In Learning Backpropagation
NASA Astrophysics Data System (ADS)
Tata Hardinata, Jaya; Zarlis, Muhammad; Budhiarti Nababan, Erna; Hartama, Dedy; Sembiring, Rahmat W.
2017-12-01
Backpropagation is one type of artificial neural network algorithm. A trained backpropagation network can produce correct outputs for inputs that are similar to, but not the same as, those used during training. The selection of appropriate parameters also affects the outcome; the learning rate is one of the parameters that influence the training process, since it determines the speed of learning for a given network architecture. If the learning rate is set too large, the algorithm becomes unstable; if it is set too small, the algorithm converges only after a very long time. This study was therefore made to determine the learning rate for the backpropagation algorithm. The LVQ learning-rate model is one of the models used to determine the learning rate of the LVQ algorithm, and here this model is modified and applied to backpropagation. The experimental results show that applying the modified LVQ learning-rate model to the backpropagation algorithm makes the learning process faster (fewer epochs).
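The general idea of controlling the learning rate during backpropagation can be illustrated with a simple decaying schedule, in the spirit of the LVQ-inspired modification described above: a larger rate early in training for speed, a smaller rate later for stability. The geometric decay, its factor, and the toy objective below are assumptions for illustration; the paper's exact rule is not reproduced.

```python
# Gradient-descent step with a learning rate that decays over epochs,
# illustrating the general idea of schedule-based learning-rate control.
# The geometric decay and its factor are assumptions, not the paper's rule.
def decayed_learning_rate(initial_rate, epoch, decay=0.95):
    """Learning rate after `epoch` epochs of geometric decay."""
    return initial_rate * (decay ** epoch)

def sgd_step(weights, gradients, rate):
    """One plain gradient-descent update."""
    return [w - rate * g for w, g in zip(weights, gradients)]

weights = [0.8, -0.3]
for epoch in range(5):
    rate = decayed_learning_rate(0.5, epoch)
    gradients = [2 * w for w in weights]        # toy gradients of sum(w**2)
    weights = sgd_step(weights, gradients, rate)
    print(f"epoch {epoch}: rate = {rate:.3f}, weights = {weights}")
```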
The Livermore Brain: Massive Deep Learning Networks Enabled by High Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Barry Y.
The proliferation of inexpensive sensor technologies like the ubiquitous digital image sensors has resulted in the collection and sharing of vast amounts of unsorted and unexploited raw data. Companies and governments who are able to collect and make sense of large datasets to help them make better decisions more rapidly will have a competitive advantage in the information era. Machine Learning technologies play a critical role for automating the data understanding process; however, to be maximally effective, useful intermediate representations of the data are required. These representations or “features” are transformations of the raw data into a form where patterns are more easily recognized. Recent breakthroughs in Deep Learning have made it possible to learn these features from large amounts of labeled data. The focus of this project is to develop and extend Deep Learning algorithms for learning features from vast amounts of unlabeled data and to develop the HPC neural network training platform to support the training of massive network models. This LDRD project succeeded in developing new unsupervised feature learning algorithms for images and video and created a scalable neural network training toolkit for HPC. Additionally, this LDRD helped create the world’s largest freely-available image and video dataset supporting open multimedia research and used this dataset for training our deep neural networks. This research helped LLNL capture several work-for-others (WFO) projects, attract new talent, and establish collaborations with leading academic and commercial partners. Finally, this project demonstrated the successful training of the largest unsupervised image neural network using HPC resources and helped establish LLNL leadership at the intersection of Machine Learning and HPC research.
Lin, Juin-Shu; Yen-Chi, Liao; Lee, Ting-Ting
2006-01-01
The rapid development of computer technology has increased the Internet's popularity and made daily services more timely and convenient. It has also become a trend in nursing practice to implement network education models that overcome distance barriers and help nurses obtain more knowledge. The purpose of this study was to investigate the relationships among nursing staff's information competency, satisfaction, and outcomes of network education. After completing 4 weeks of network education, a total of 218 nurses answered the on-line questionnaires. The results revealed that nurses who joined the computer training course for less than 3 hours per week, lacked networking connection devices, or held only a college degree had lower nursing informatics competency, whereas nurses who were older, held an N4 position, had on-line course experience, and participated for more than 4 hours each week had higher nursing informatics competency. Those who participated in the network education course for less than 4 hours per week were less satisfied. There were significant differences between nursing positions before and after the network education. Nurses with higher nursing information competency were also more satisfied with the network education. Network education not only enhances learners' computer competency but also improves their learning satisfaction. By promoting network education and improving nurses' hardware and software skills and knowledge, institutions can help nurses use networks to access learning resources. Healthcare institutions should also enhance their computer infrastructure and establish standards for certificate courses to increase learning motivation and learning outcomes.
Multiple neural network approaches to clinical expert systems
NASA Astrophysics Data System (ADS)
Stubbs, Derek F.
1990-08-01
We briefly review the concept of computer-aided medical diagnosis and more extensively review the existing literature on neural network applications in the field. Neural networks can function as simple expert systems for diagnosis or prognosis. Using a public database, we develop a neural network for the diagnosis of a major presenting symptom while discussing the development process and possible approaches. Biomedicine is an incredibly diverse and multidisciplinary field, and it is not surprising that neural networks are finding more and more applications in this highly non-linear domain. I want to concentrate on neural networks as medical expert systems for clinical diagnosis or prognosis. Expert systems started out as sets of computerized "if-then" rules. Everything was reduced to Boolean logic, and the promised land of computer experts was said to be in sight. It never came. Why? First, the computer code explodes as the number of "ifs" increases, because all the "ifs" have to interact. Second, experts are not very good at reducing expertise to language; it turns out that experts recognize patterns and have non-verbal, intuitive decision processes. Third, learning by example rather than learning by rule is the way natural brains work, and making computers work by rule-learning is hideously labor intensive. Neural networks can learn from example. They learn the results
A Guide to the Literature on Learning Graphical Models
NASA Technical Reports Server (NTRS)
Buntine, Wray L.; Friedland, Peter (Technical Monitor)
1994-01-01
This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and more generally, learning probabilistic graphical models. Because many problems in artificial intelligence, statistics and neural networks can be represented as a probabilistic graphical model, this area provides a unifying perspective on learning. This paper organizes the research in this area along methodological lines of increasing complexity.
ERIC Educational Resources Information Center
Liu, M.; Abe, K.; Cao, M. W.; Liu, S.; Ok, D. U.; Park, J.; Parrish, C.; Sardegna, V. G.
2015-01-01
Although educators are excited about the potential of social network sites for language learning (SNSLL), there is a lack of understanding of how SNSLL can be used to facilitate teaching and learning for English as Second language (ESL) instructors and students. The purpose of this study was to examine the affordances of four selected SNSLL…
ERIC Educational Resources Information Center
Howard, Lyz
2016-01-01
As an experienced face-to-face teacher, working in a small Crown Dependency with no Higher Education Institute (HEI) to call its own, the subsequent geographical and professional isolation in the context of Networked Learning (NL), as a sub-set of eLearning, calls for innovative ways in which to develop self-reliant methods of professional…
ERIC Educational Resources Information Center
Asensio, Mireia, Ed.; Foster, Jonathan, Ed.; Hodgson, Vivien, Ed.; McConnell, David, Ed.
This document contains 59 papers presented at a conference in England on approaches to lifelong learning and higher education through the Internet. Representative papers include the following: "The University of the Highlands and Islands Project: A Model for Networked Learning?" (Veronica Adamson, Jane Plenderleith); "The Costs of…
Towards deep learning with segregated dendrites
Guerguiev, Jordan; Lillicrap, Timothy P
2017-01-01
Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations—the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons. PMID:29205151
Towards deep learning with segregated dendrites.
Guerguiev, Jordan; Lillicrap, Timothy P; Richards, Blake A
2017-12-05
Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations-the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons.
Inversion of surface parameters using fast learning neural networks
NASA Technical Reports Server (NTRS)
Dawson, M. S.; Olvera, J.; Fung, A. K.; Manry, M. T.
1992-01-01
A neural network approach to the inversion of surface scattering parameters is presented. Simulated data sets based on a surface scattering model are used so that the data may be viewed as taken from a completely known randomly rough surface. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) are tested on the simulated backscattering data. The RMS error of training the FL network is found to be less than one half the error of the BP network while requiring one to two orders of magnitude less CPU time. When applied to inversion of parameters from a statistically rough surface, the FL method is successful at recovering the surface permittivity, the surface correlation length, and the RMS surface height in less time and with less error than the BP network. Further applications of the FL neural network to the inversion of parameters from backscatter measurements of an inhomogeneous layer above a half space are shown.
Social learning strategies modify the effect of network structure on group performance.
Barkoczi, Daniel; Galesic, Mirta
2016-10-07
The structure of communication networks is an important determinant of the capacity of teams, organizations and societies to solve policy, business and science problems. Yet, previous studies reached contradictory results about the relationship between network structure and performance, finding support for the superiority of both well-connected efficient and poorly connected inefficient network structures. Here we argue that understanding how communication networks affect group performance requires taking into consideration the social learning strategies of individual team members. We show that efficient networks outperform inefficient networks when individuals rely on conformity by copying the most frequent solution among their contacts. However, inefficient networks are superior when individuals follow the best member by copying the group member with the highest payoff. In addition, groups relying on conformity based on a small sample of others excel at complex tasks, while groups following the best member achieve greatest performance for simple tasks. Our findings reconcile contradictory results in the literature and have broad implications for the study of social learning across disciplines.
Social learning strategies modify the effect of network structure on group performance
NASA Astrophysics Data System (ADS)
Barkoczi, Daniel; Galesic, Mirta
2016-10-01
The structure of communication networks is an important determinant of the capacity of teams, organizations and societies to solve policy, business and science problems. Yet, previous studies reached contradictory results about the relationship between network structure and performance, finding support for the superiority of both well-connected efficient and poorly connected inefficient network structures. Here we argue that understanding how communication networks affect group performance requires taking into consideration the social learning strategies of individual team members. We show that efficient networks outperform inefficient networks when individuals rely on conformity by copying the most frequent solution among their contacts. However, inefficient networks are superior when individuals follow the best member by copying the group member with the highest payoff. In addition, groups relying on conformity based on a small sample of others excel at complex tasks, while groups following the best member achieve greatest performance for simple tasks. Our findings reconcile contradictory results in the literature and have broad implications for the study of social learning across disciplines.
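The two strategies compared in these records can be sketched directly: conformity copies the most frequent solution among an agent's contacts, while best-member imitation copies the contact with the highest payoff. The toy payoff function, ring network, and acceptance rule below are illustrative assumptions, not the authors' simulation design.

```python
# Two social learning strategies on a toy ring network: conformity copies the
# most frequent neighbouring solution, best-member copies the highest-payoff
# neighbour. The payoff function and network are illustrative assumptions.
import random
from collections import Counter

N = 20
payoff = lambda s: -abs(s - 7)                        # toy task: solutions near 7 are better
neighbours = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}   # ring network

def step(solutions, strategy):
    new = []
    for i in range(N):
        options = [solutions[j] for j in neighbours[i]]
        if strategy == "conformity":
            choice = Counter(options).most_common(1)[0][0]       # most frequent solution
        else:                                                    # "best_member"
            choice = max(options, key=payoff)                    # highest-payoff neighbour
        new.append(choice if payoff(choice) > payoff(solutions[i]) else solutions[i])
    return new

random.seed(1)
for strategy in ("conformity", "best_member"):
    solutions = [random.randint(0, 20) for _ in range(N)]
    for _ in range(10):
        solutions = step(solutions, strategy)
    mean_payoff = sum(payoff(s) for s in solutions) / N
    print(strategy, round(mean_payoff, 2))
```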
A neural network prototyping package within IRAF
NASA Technical Reports Server (NTRS)
Bazell, D.; Bankman, I.
1992-01-01
We outline our plans for incorporating a Neural Network Prototyping Package into the IRAF environment. The package we are developing will allow the user to choose between different types of networks and to specify the details of the particular architecture chosen. Neural networks consist of a highly interconnected set of simple processing units. The strengths of the connections between units are determined by weights which are adaptively set as the network 'learns'. In some cases, learning can be a separate phase of the user cycle of the network while in other cases the network learns continuously. Neural networks have been found to be very useful in pattern recognition and image processing applications. They can form very general 'decision boundaries' to differentiate between objects in pattern space and they can be used for associative recall of patterns based on partial cues and for adaptive filtering. We discuss the different architectures we plan to use and give examples of what they can do.
Excitement and synchronization of small-world neuronal networks with short-term synaptic plasticity.
Han, Fang; Wiercigroch, Marian; Fang, Jian-An; Wang, Zhijie
2011-10-01
Excitement and synchronization of electrically and chemically coupled Newman-Watts (NW) small-world neuronal networks with a short-term synaptic plasticity described by a modified Oja learning rule are investigated. For each type of neuronal network, the variation properties of synaptic weights are examined first. Then the effects of the learning rate, the coupling strength and the shortcut-adding probability on excitement and synchronization of the neuronal network are studied. It is shown that the synaptic learning suppresses the over-excitement, helps synchronization for the electrically coupled network but impairs synchronization for the chemically coupled one. Both the introduction of shortcuts and the increase of the coupling strength improve synchronization and they are helpful in increasing the excitement for the chemically coupled network, but have little effect on the excitement of the electrically coupled one.
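The short-term plasticity in this model is a modified Oja rule; for reference, the standard Oja update is shown below, in which a decay term proportional to the squared output keeps the weights bounded, unlike plain Hebbian growth. The learning rate, input statistics, and dimensionality are illustrative assumptions and do not reproduce the paper's modification.

```python
# Standard Oja learning rule: dw = eta * y * (x - y * w).
# The decay term -eta * y**2 * w keeps the weight vector bounded, in contrast
# to plain Hebbian growth. Inputs, rate, and dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(3)
w /= np.linalg.norm(w)
eta = 0.01

for _ in range(2000):
    x = rng.multivariate_normal(np.zeros(3), [[3, 1, 0], [1, 2, 0], [0, 0, 1]])
    y = w @ x                       # neuron output (linear unit)
    w += eta * y * (x - y * w)      # Oja update: Hebbian term minus normalizing decay

print(np.linalg.norm(w))            # stays close to 1 (bounded weights)
```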
NASA Astrophysics Data System (ADS)
Drexler, Wendy
This design-based research case study applied a networked learning approach to a seventh grade science class at a public school in the southeastern United States. Students adapted emerging Web applications to construct personal learning environments for in-depth scientific inquiry of poisonous and venomous life forms. The personal learning environments constructed used Application Programming Interface (API) widgets to access, organize, and synthesize content from a number of educational Internet resources and social network connections. This study examined the nature of personal learning environments; the processes students go through during construction, and patterns that emerged. The project was documented from both an instructional and student-design perspective. Findings revealed that students applied the processes of: practicing digital responsibility; practicing digital literacy; organizing content; collaborating and socializing; and synthesizing and creating. These processes informed a model of the networked student that will serve as a framework for future instructional designs. A networked learning approach that incorporates these processes into future designs has implications for student learning, teacher roles, professional development, administrative policies, and delivery. This work is significant in that it shifts the focus from technology innovations based on tools to student empowerment based on the processes required to support learning. It affirms the need for greater attention to digital literacy and responsibility in K12 schools as well as consideration for those skills students will need to achieve success in the 21st century. The design-based research case study provides a set of design principles for teachers to follow when facilitating student construction of personal learning environments.
A Theory of How Columns in the Neocortex Enable Learning the Structure of the World
Hawkins, Jeff; Ahmad, Subutai; Cui, Yuwei
2017-01-01
Neocortical regions are organized into columns and layers. Connections between layers run mostly perpendicular to the surface, suggesting a columnar functional organization. Some layers have long-range excitatory lateral connections suggesting interactions between columns. Similar patterns of connectivity exist in all regions, but their exact role remains a mystery. In this paper, we propose a network model composed of columns and layers that performs robust object learning and recognition. Each column integrates its changing input over time to learn complete predictive models of observed objects. Excitatory lateral connections across columns allow the network to more rapidly infer objects based on the partial knowledge of adjacent columns. Because columns integrate input over time and space, the network learns models of complex objects that extend well beyond the receptive field of individual cells. Our network model introduces a new feature to cortical columns. We propose that a representation of location relative to the object being sensed is calculated within the sub-granular layers of each column. The location signal is provided as an input to the network, where it is combined with sensory data. Our model contains two layers and one or more columns. Simulations show that, using Hebbian-like learning rules, small single-column networks can learn to recognize hundreds of objects, with each object containing tens of features. Multi-column networks recognize objects with significantly fewer movements of the sensory receptors. Given the ubiquity of columnar and laminar connectivity patterns throughout the neocortex, we propose that columns and regions have more powerful recognition and modeling capabilities than previously assumed. PMID:29118696
The Time Course of Task-Specific Memory Consolidation Effects in Resting State Networks
Sami, Saber; Robertson, Edwin M.
2014-01-01
Previous studies have reported functionally localized changes in resting-state brain activity following a short period of motor learning, but their relationship with memory consolidation and their dependence on the form of learning is unclear. We investigate these questions with implicit or explicit variants of the serial reaction time task (SRTT). fMRI resting-state functional connectivity was measured in human subjects before the tasks, and 0.1, 0.5, and 6 h after learning. There was significant improvement in procedural skill in both groups, with the group learning under explicit conditions showing stronger initial acquisition, and greater improvement at the 6 h retest. Immediately following acquisition, this group showed enhanced functional connectivity in networks including frontal and cerebellar areas and in the visual cortex. Thirty minutes later, enhanced connectivity was observed between cerebellar nuclei, thalamus, and basal ganglia, whereas at 6 h there was enhanced connectivity in a sensory-motor cortical network. In contrast, immediately after acquisition under implicit conditions, there was increased connectivity in a network including precentral and sensory-motor areas, whereas after 30 min a similar cerebello-thalamo-basal ganglionic network was seen as in explicit learning. Finally, 6 h after implicit learning, we found increased connectivity in medial temporal cortex, but reduction in precentral and sensory-motor areas. Our findings are consistent with predictions that two variants of the SRTT task engage dissociable functional networks, although there are also networks in common. We also show a converging and diverging pattern of flux between prefrontal, sensory-motor, and parietal areas, and subcortical circuits across a 6 h consolidation period. PMID:24623776
Distributed Learning, Recognition, and Prediction by ART and ARTMAP Neural Networks.
Carpenter, Gail A.
1997-11-01
A class of adaptive resonance theory (ART) models for learning, recognition, and prediction with arbitrarily distributed code representations is introduced. Distributed ART neural networks combine the stable fast learning capabilities of winner-take-all ART systems with the noise tolerance and code compression capabilities of multilayer perceptrons. With a winner-take-all code, the unsupervised model dART reduces to fuzzy ART and the supervised model dARTMAP reduces to fuzzy ARTMAP. With a distributed code, these networks automatically apportion learned changes according to the degree of activation of each coding node, which permits fast as well as slow learning without catastrophic forgetting. Distributed ART models replace the traditional neural network path weight with a dynamic weight equal to the rectified difference between coding node activation and an adaptive threshold. Thresholds increase monotonically during learning according to a principle of atrophy due to disuse. However, monotonic change at the synaptic level manifests itself as bidirectional change at the dynamic level, where the result of adaptation resembles long-term potentiation (LTP) for single-pulse or low frequency test inputs but can resemble long-term depression (LTD) for higher frequency test inputs. This paradoxical behavior is traced to dual computational properties of phasic and tonic coding signal components. A parallel distributed match-reset-search process also helps stabilize memory. Without the match-reset-search system, dART becomes a type of distributed competitive learning network.
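As the abstract notes, with a winner-take-all code the unsupervised model dART reduces to fuzzy ART. The sketch below shows the fuzzy ART ingredients in that special case: complement coding, a choice function, a vigilance test, and fast learning with the fuzzy AND (component-wise minimum). Parameter values are illustrative assumptions.

```python
# Winner-take-all fuzzy ART sketch: choice function, vigilance test, and
# fast learning with the fuzzy AND (component-wise minimum). Parameter values
# are illustrative assumptions; complement coding of inputs is assumed.
import numpy as np

class FuzzyART:
    def __init__(self, alpha=0.001, rho=0.75, beta=1.0):
        self.alpha, self.rho, self.beta = alpha, rho, beta
        self.w = []                                   # one weight vector per category

    def train(self, i):
        i = np.concatenate([i, 1.0 - i])              # complement coding
        order = sorted(range(len(self.w)),
                       key=lambda j: -np.minimum(i, self.w[j]).sum()
                                      / (self.alpha + self.w[j].sum()))
        for j in order:                               # search categories by choice value
            match = np.minimum(i, self.w[j]).sum() / i.sum()
            if match >= self.rho:                     # vigilance test passed
                self.w[j] = self.beta * np.minimum(i, self.w[j]) \
                            + (1 - self.beta) * self.w[j]
                return j
        self.w.append(i.copy())                       # no match: commit a new category
        return len(self.w) - 1

art = FuzzyART()
for x in [[0.1, 0.2], [0.12, 0.18], [0.9, 0.85]]:
    print(art.train(np.array(x)))                     # first two inputs share a category
```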
Bayesian networks in neuroscience: a survey.
Bielza, Concha; Larrañaga, Pedro
2014-01-01
Bayesian networks are a type of probabilistic graphical model that lies at the intersection of statistics and machine learning. They have been shown to be powerful tools to encode dependence relationships among the variables of a domain under uncertainty. Thanks to their generality, Bayesian networks can accommodate continuous and discrete variables, as well as temporal processes. In this paper we review Bayesian networks and how they can be learned automatically from data by means of structure learning algorithms. Also, we examine how a user can take advantage of these networks for reasoning by exact or approximate inference algorithms that propagate the given evidence through the graphical structure. Despite their applicability in many fields, they have been little used in neuroscience, where they have focused on specific problems, like functional connectivity analysis from neuroimaging data. Here we survey key research in neuroscience where Bayesian networks have been used with different aims: discover associations between variables, perform probabilistic reasoning over the model, and classify new observations with and without supervision. The networks are learned from data of any kind (morphological, electrophysiological, -omics and neuroimaging), thereby broadening the scope (molecular, cellular, structural, functional, cognitive and medical) of the brain aspects to be studied.
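One common flavour of the structure learning mentioned in this survey is score based: candidate DAGs over discrete variables are compared with a decomposable score such as BIC, assembled from conditional counts in the data. The tiny binary dataset, variable names, and candidate structures below are illustrative assumptions used only to show how such a score is computed.

```python
# BIC score of a candidate DAG for discrete data: per-node log-likelihood from
# conditional counts minus a complexity penalty. Dataset and structures are
# illustrative assumptions used only to show how the score is assembled.
import math
from collections import Counter
from itertools import product

data = [  # tiny binary dataset over variables A, B, C
    {"A": 0, "B": 0, "C": 0}, {"A": 0, "B": 1, "C": 1},
    {"A": 1, "B": 1, "C": 1}, {"A": 1, "B": 1, "C": 0},
    {"A": 0, "B": 0, "C": 0}, {"A": 1, "B": 0, "C": 1},
]
states = {"A": [0, 1], "B": [0, 1], "C": [0, 1]}

def bic(dag):
    """dag maps each node to a tuple of parent names."""
    n, score = len(data), 0.0
    for node, parents in dag.items():
        counts = Counter((tuple(r[p] for p in parents), r[node]) for r in data)
        for pa_cfg in product(*(states[p] for p in parents)):
            total = sum(counts[(pa_cfg, v)] for v in states[node])
            for v in states[node]:
                c = counts[(pa_cfg, v)]
                if c:
                    score += c * math.log(c / total)       # log-likelihood term
        k = (len(states[node]) - 1) * math.prod(len(states[p]) for p in parents)
        score -= 0.5 * k * math.log(n)                     # BIC complexity penalty
    return score

print(bic({"A": (), "B": ("A",), "C": ("B",)}))            # chain A -> B -> C
print(bic({"A": (), "B": (), "C": ()}))                    # empty graph
```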
Fast temporal neural learning using teacher forcing
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad (Inventor); Bahren, Jacob (Inventor)
1992-01-01
A neural network is trained to output a time dependent target vector defined over a predetermined time interval in response to a time dependent input vector defined over the same time interval by applying corresponding elements of the error vector, or difference between the target vector and the actual neuron output vector, to the inputs of corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles are required to complete the learning process. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The foregoing process is repeated in repetitive cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as the network parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.
Fast temporal neural learning using teacher forcing
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad (Inventor); Bahren, Jacob (Inventor)
1995-01-01
A neural network is trained to output a time dependent target vector defined over a predetermined time interval in response to a time dependent input vector defined over the same time interval by applying corresponding elements of the error vector, or difference between the target vector and the actual neuron output vector, to the inputs of corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles are required to complete the learning process. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The foregoing process is repeated in repetitive cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as the network parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.
Bayesian networks in neuroscience: a survey
Bielza, Concha; Larrañaga, Pedro
2014-01-01
Bayesian networks are a type of probabilistic graphical model that lies at the intersection of statistics and machine learning. They have been shown to be powerful tools to encode dependence relationships among the variables of a domain under uncertainty. Thanks to their generality, Bayesian networks can accommodate continuous and discrete variables, as well as temporal processes. In this paper we review Bayesian networks and how they can be learned automatically from data by means of structure learning algorithms. Also, we examine how a user can take advantage of these networks for reasoning by exact or approximate inference algorithms that propagate the given evidence through the graphical structure. Despite their applicability in many fields, they have been little used in neuroscience, where they have focused on specific problems, like functional connectivity analysis from neuroimaging data. Here we survey key research in neuroscience where Bayesian networks have been used with different aims: discover associations between variables, perform probabilistic reasoning over the model, and classify new observations with and without supervision. The networks are learned from data of any kind (morphological, electrophysiological, -omics and neuroimaging), thereby broadening the scope (molecular, cellular, structural, functional, cognitive and medical) of the brain aspects to be studied. PMID:25360109
In-situ trainable intrusion detection system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Symons, Christopher T.; Beaver, Justin M.; Gillen, Rob
A computer-implemented method detects intrusions by analyzing network traffic. The method includes a semi-supervised learning module connected to a network node. The learning module uses labeled and unlabeled data to train a semi-supervised machine learning sensor. The method records events that include a feature set made up of unauthorized intrusions and benign computer requests. The method identifies at least some of the benign computer requests that occur during the recording of the events while treating the remainder of the data as unlabeled. The method trains the semi-supervised learning module at the network node in-situ, such that the semi-supervised learning module may identify malicious traffic without relying on specific rules, signatures, or anomaly detection.
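A generic way to picture the semi-supervised idea in this record (not the patented in-situ method itself) is self-training: fit a simple classifier on the labeled events, pseudo-label the unlabeled events it classifies confidently, and refit. The nearest-centroid classifier, synthetic feature vectors, and confidence threshold below are illustrative assumptions.

```python
# Generic self-training loop over labeled and unlabeled event feature vectors.
# The nearest-centroid classifier, synthetic features, and confidence threshold
# are illustrative assumptions; this is not the patented in-situ method above.
import numpy as np

def fit_centroids(x, y):
    return {label: x[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, x):
    labels = list(centroids)
    d = np.stack([np.linalg.norm(x - centroids[l], axis=1) for l in labels])
    conf = 1.0 - d.min(axis=0) / (d.sum(axis=0) + 1e-9)   # crude confidence score
    return np.array(labels)[d.argmin(axis=0)], conf

rng = np.random.default_rng(0)
x_lab = np.vstack([rng.normal(0, 1, (10, 3)), rng.normal(6, 1, (10, 3))])
y_lab = np.array([0] * 10 + [1] * 10)                     # 0 = benign, 1 = intrusion
x_unl = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(6, 1, (50, 3))])

for _ in range(3):                                        # a few self-training rounds
    centroids = fit_centroids(x_lab, y_lab)
    pred, conf = predict(centroids, x_unl)
    keep = conf > 0.75                                    # pseudo-label confident events
    x_lab = np.vstack([x_lab, x_unl[keep]])
    y_lab = np.concatenate([y_lab, pred[keep]])
    x_unl = x_unl[~keep]
    print(len(y_lab), "labeled events after this round")
```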
Salient object detection based on multi-scale contrast.
Wang, Hai; Dai, Lei; Cai, Yingfeng; Sun, Xiaoqiang; Chen, Long
2018-05-01
Owing to the development of deep learning networks, salient object detection based on deep networks used for feature extraction has made a great breakthrough compared with traditional methods. At present, salient object detection mainly relies on very deep convolutional networks to extract features. In deep learning networks, however, a dramatic increase in network depth may instead cause more training errors. In this paper, we use a residual network to increase network depth while mitigating the errors caused by the increased depth. Inspired by image simplification, we use color and texture features to obtain simplified images at multiple scales by means of region assimilation on the basis of super-pixels, in order to reduce the complexity of images and improve the accuracy of salient target detection. We refine features at the pixel level with a multi-scale feature correction method to avoid feature errors introduced when the image is simplified at the region level. The final fully connected layer not only integrates multi-scale and multi-level features but also serves as the classifier of salient targets. The experimental results show that the proposed model achieves better results than other salient object detection models based on the original deep learning networks. Copyright © 2018 Elsevier Ltd. All rights reserved.
Gerraty, Raphael T; Davidow, Juliet Y; Wimmer, G Elliott; Kahn, Itamar; Shohamy, Daphna
2014-08-20
An important aspect of adaptive learning is the ability to flexibly use past experiences to guide new decisions. When facing a new decision, some people automatically leverage previously learned associations, while others do not. This variability in transfer of learning across individuals has been demonstrated repeatedly and has important implications for understanding adaptive behavior, yet the source of these individual differences remains poorly understood. In particular, it is unknown why such variability in transfer emerges even among homogeneous groups of young healthy participants who do not vary on other learning-related measures. Here we hypothesized that individual differences in the transfer of learning could be related to relatively stable differences in intrinsic brain connectivity, which could constrain how individuals learn. To test this, we obtained a behavioral measure of memory-based transfer outside of the scanner and on a separate day acquired resting-state functional MRI images in 42 participants. We then analyzed connectivity across independent component analysis-derived brain networks during rest, and tested whether intrinsic connectivity in learning-related networks was associated with transfer. We found that individual differences in transfer were related to intrinsic connectivity between the hippocampus and the ventromedial prefrontal cortex, and between these regions and large-scale functional brain networks. Together, the findings demonstrate a novel role for intrinsic brain dynamics in flexible learning-guided behavior, both within a set of functionally specific regions known to be important for learning, as well as between these regions and the default and frontoparietal networks, which are thought to serve more general cognitive functions. Copyright © 2014 the authors 0270-6474/14/3411297-07$15.00/0.
Biologically Inspired SNN for Robot Control.
Nichols, Eric; McDaid, Liam J; Siddique, Nazmul
2013-02-01
This paper proposes a spiking-neural-network-based robot controller inspired by the control structures of biological systems. Information is routed through the network using facilitating dynamic synapses with short-term plasticity. Learning occurs through long-term synaptic plasticity which is implemented using the temporal difference learning rule to enable the robot to learn to associate the correct movement with the appropriate input conditions. The network self-organizes to provide memories of environments that the robot encounters. A Pioneer robot simulator with laser and sonar proximity sensors is used to verify the performance of the network with a wall-following task, and the results are presented.
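The temporal difference rule mentioned above can be written compactly in its tabular TD(0) form: each visited state's value moves toward the observed reward plus the discounted value of the next state. The three-state toy task, rewards, and parameters below are illustrative assumptions and do not reproduce the paper's spiking synaptic implementation.

```python
# Tabular TD(0) update: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)).
# The three-state corridor, rewards, and parameters are illustrative assumptions,
# not the spiking-network implementation described in the paper above.
states = ["far", "near", "at_wall"]
V = {s: 0.0 for s in states}
alpha, gamma = 0.1, 0.9

def episode():
    """Walk far -> near -> at_wall; only reaching the wall yields reward."""
    path = [("far", "near", 0.0), ("near", "at_wall", 1.0)]
    for s, s_next, r in path:
        td_error = r + gamma * V[s_next] - V[s]     # temporal difference error
        V[s] += alpha * td_error                    # move the estimate toward the target

for _ in range(200):
    episode()
print({s: round(v, 2) for s, v in V.items()})       # values propagate back from the wall
```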
Social Software: Participants' Experience Using Social Networking for Learning
ERIC Educational Resources Information Center
Batchelder, Cecil W.
2010-01-01
Social networking tools used in learning provides instructional design with tools for transformative change in education. This study focused on defining the meanings and essences of social networking through the lived common experiences of 7 college students. The problem of the study was a lack of learner voice in understanding the value of social…
Nurturing Global Collaboration and Networked Learning in Higher Education
ERIC Educational Resources Information Center
Cronin, Catherine; Cochrane, Thomas; Gordon, Averill
2016-01-01
We consider the principles of communities of practice (CoP) and networked learning in higher education, illustrated with a case study. iCollab has grown from an international community of practice connecting students and lecturers in seven modules across seven higher education institutions in six countries, to a global network supporting the…
Innovative Professional Development: Expanding Your Professional Learning Network
ERIC Educational Resources Information Center
Perez, Lisa
2012-01-01
To assume the role of technology leaders and information literacy specialists in their schools, librarians need access to the most current information. And, they do this by helping each other. There are many definitions, but professional learning networks (PLNs) involve sharing work-related ideas with a network of colleagues via various digital…
ERIC Educational Resources Information Center
Grunspan, Daniel Z.; Wiggins, Benjamin L.; Goodreau, Steven M.
2014-01-01
Social interactions between students are a major and underexplored part of undergraduate education. Understanding how learning relationships form in undergraduate classrooms, as well as the impacts these relationships have on learning outcomes, can inform educators in unique ways and improve educational reform. Social network analysis (SNA)…
Using Action Research and Action Learning for Entrepreneurial Network Capability Development
ERIC Educational Resources Information Center
McGrath, Helen; O'Toole, Thomas
2016-01-01
This paper applies an action research (AR) design and action learning (AL) approach to network capability development in an entrepreneurial context. Recent research suggests that networks are a viable strategy for the entrepreneurial firm to overcome the liabilities associated with newness and smallness. However, a gap emerges as few, if any,…
Peer-Learning Networks in Social Work Doctoral Education: An Interdisciplinary Model
ERIC Educational Resources Information Center
Miller, J. Jay; Duron, Jacquelynn F.; Bosk, Emily Adlin; Finno-Velasquez, Megan; Abner, Kristin S.
2016-01-01
Peer-learning networks (PLN) can be valuable tools for doctoral students. Participation in these networks can aid in the completion of the dissertation, lead to increased scholarship productivity, and assist in student retention. Yet, despite the promise of PLNs, few studies have documented their effect on social work doctoral education. This…
How Social Network Position Relates to Knowledge Building in Online Learning Communities
ERIC Educational Resources Information Center
Wang, Lu
2010-01-01
Social Network Analysis, Statistical Analysis, Content Analysis and other research methods were used to research online learning communities at Capital Normal University, Beijing. Analysis of the two online courses resulted in the following conclusions: (1) Social networks of the two online courses form typical core-periphery structures; (2)…
ERIC Educational Resources Information Center
Tell, Joakim; Halila, Fawzi
2001-01-01
Small businesses implementing ISO 14001 standards worked with a university to develop a learning network. The network served as a source of inspiration and reflection as well as a sounding board. It enabled small enterprises to act collectively, compensating for individual lack of resources. (SK)
Social Networking Sites and Language Learning
ERIC Educational Resources Information Center
Brick, Billy
2011-01-01
This article examines a study of seven learners who logged their experiences on the language learning social networking site Livemocha over a period of three months. The features of the site are described and the likelihood of their future success is considered. The learners were introduced to the Social Networking Site (SNS) and asked to learn a…
Networked eLearning and Collaborative Knowledge Building: Design and Facilitation
ERIC Educational Resources Information Center
Sorensen, Elsebeth Korsgaard
2005-01-01
This paper addresses the core goals for educators to stimulate participation across diversity (including life trajectories and culture) and motivate learners to engage in negotiation of meaning and knowledge building dialogue in the processes of networked learning. The paper reports on a Danish masters online course on networked learning for…
Request for Support: A Tool for Strengthening Network Capacity
ERIC Educational Resources Information Center
Bain, Jamie; Harden, Noelle; Heim, Stephanie
2017-01-01
A request for support (RFS) is a tool that is used to strengthen network capacity by prioritizing needs and optimizing learning opportunities. Within University of Minnesota Extension, we implemented an RFS process through an online survey designed to help leaders of food networks identify and rank learning and capacity-building needs and indicate…
Evolutionary neural networks for anomaly detection based on the behavior of a program.
Han, Sang-Jun; Cho, Sung-Bae
2006-06-01
Learning the behavior of a given program from system-call audit data using machine-learning techniques is effective for detecting intrusions. Rule learning, neural networks, statistics, and hidden Markov models (HMMs) are representative methods for intrusion detection. Among them, neural networks are known to perform well in learning system-call sequences. To apply this knowledge to real-world problems successfully, it is important to determine appropriate network structures and weights. However, finding appropriate structures takes a very long time because no suitable analytical solutions exist. In this paper, a novel intrusion-detection technique based on evolutionary neural networks (ENNs) is proposed. One advantage of ENNs is that they take less time to obtain superior neural networks than conventional approaches, because they discover the structures and weights of the neural networks simultaneously. Experimental results with the 1999 Defense Advanced Research Projects Agency (DARPA) Intrusion Detection Evaluation (IDEVAL) data confirm that ENNs are promising tools for intrusion detection.
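As a rough illustration of the evolutionary idea described above, the following Python sketch evolves only the weights of a small fixed-structure network with a simple genetic algorithm on placeholder data; the paper itself evolves structures and weights together on system-call sequences.

```python
# Minimal sketch (not the paper's implementation): evolving the weights of a
# tiny feed-forward network with a genetic algorithm instead of backpropagation.
# The fitness function and toy data stand in for the system-call classification task.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                # stand-in for encoded call sequences
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # stand-in for normal/intrusion labels

def forward(weights, X):
    W1, W2 = weights
    h = np.tanh(X @ W1)
    return 1 / (1 + np.exp(-(h @ W2)))       # intrusion probability

def fitness(weights):
    p = forward(weights, X).ravel()
    return -np.mean((p - y) ** 2)            # higher is better

def random_individual():
    return [rng.normal(scale=0.5, size=(8, 6)), rng.normal(scale=0.5, size=(6, 1))]

def mutate(weights, sigma=0.1):
    return [W + rng.normal(scale=sigma, size=W.shape) for W in weights]

# Simple (mu + lambda) evolution: keep the best half, refill with mutated copies.
population = [random_individual() for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(p) for p in parents]

best = max(population, key=fitness)
print("best fitness:", fitness(best))
```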
Fe substitution and pressure effects on superconductor Re6Hf
NASA Astrophysics Data System (ADS)
Yang, Jinhu; Guo, Yang; Wang, Hangdong; Chen, Bin
2018-04-01
Polycrystalline samples of (Re1-xFex)6Hf were synthesized by the arc-melting method, and the phase purity of the samples was confirmed by powder X-ray diffraction. In this paper, we report the Fe substitution and pressure effects on the non-centrosymmetric superconductor Re6Hf. The superconducting transition temperature, TC, was determined from measurements of magnetic susceptibility and electrical resistivity for samples with x ≤ 0.22 at temperatures down to 2 K. We find that TC is suppressed with increasing Fe content. The upper critical field Hc2 is larger than the value predicted by WHH theory and shows a linear temperature dependence down to 2 K. Upon the application of external pressure up to 2.5 GPa, TC decreases monotonically at a rate dlnTC/dP of 0.01 GPa-1.
NASA Astrophysics Data System (ADS)
Virkar, Yogesh S.; Shew, Woodrow L.; Restrepo, Juan G.; Ott, Edward
2016-10-01
Learning and memory are acquired through long-lasting changes in synapses. In the simplest models, such synaptic potentiation typically leads to runaway excitation, but in reality there must exist processes that robustly preserve overall stability of the neural system dynamics. How is this accomplished? Various approaches to this basic question have been considered. Here we propose a particularly compelling and natural mechanism for preserving stability of learning neural systems. This mechanism is based on the global processes by which metabolic resources are distributed to the neurons by glial cells. Specifically, we introduce and study a model composed of two interacting networks: a model neural network interconnected by synapses that undergo spike-timing-dependent plasticity; and a model glial network interconnected by gap junctions that diffusively transport metabolic resources among the glia and, ultimately, to neural synapses where they are consumed. Our main result is that the biophysical constraints imposed by diffusive transport of metabolic resources through the glial network can prevent runaway growth of synaptic strength, both during ongoing activity and during learning. Our findings suggest a previously unappreciated role for glial transport of metabolites in the feedback control stabilization of neural network dynamics during learning.
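A toy numerical sketch of the stabilizing mechanism described above is given below; it is a deliberate simplification (no spiking neurons or STDP), intended only to show how a diffusing, consumable resource caps synaptic growth.

```python
# Toy sketch of the core idea (my simplification, not the authors' model): synaptic
# potentiation draws on a metabolic resource held in glial compartments, which is
# replenished by a fixed supply and diffuses between neighbouring compartments.
# Weight growth is therefore capped by resource inflow instead of running away.
import numpy as np

rng = np.random.default_rng(1)
n_syn = 100
w = rng.uniform(0.1, 0.2, n_syn)      # synaptic strengths
resource = np.ones(n_syn)             # metabolic resource available at each synapse
D, supply, cost = 0.2, 0.001, 1.0     # diffusion rate, external supply, cost per unit potentiation

for t in range(5000):
    potentiation = 0.01 * rng.random(n_syn)                   # stand-in for STDP potentiation events
    potentiation = np.minimum(potentiation, resource / cost)  # limited by the local resource
    w += potentiation
    resource -= cost * potentiation
    # diffusion on a ring of glial compartments plus a constant external supply
    resource += D * (np.roll(resource, 1) + np.roll(resource, -1) - 2 * resource) + supply

print("mean weight:", round(w.mean(), 3), "mean remaining resource:", round(resource.mean(), 4))
```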
Vibration control of building structures using self-organizing and self-learning neural networks
NASA Astrophysics Data System (ADS)
Madan, Alok
2005-11-01
Past research in artificial intelligence establishes that artificial neural networks (ANN) are effective and efficient computational processors for performing a variety of tasks including pattern recognition, classification, associative recall, combinatorial problem solving, adaptive control, multi-sensor data fusion, noise filtering and data compression, modelling and forecasting. The paper presents a potentially feasible approach for training ANN in active control of earthquake-induced vibrations in building structures without the aid of teacher signals (i.e. target control forces). A counter-propagation neural network is trained to output the control forces that are required to reduce the structural vibrations in the absence of any feedback on the correctness of the output control forces (i.e. without any information on the errors in output activations of the network). The present study shows that, in principle, the counter-propagation network (CPN) can learn from the control environment to compute the required control forces without the supervision of a teacher (unsupervised learning). Simulated case studies are presented to demonstrate the feasibility of implementing the unsupervised learning approach in ANN for effective vibration control of structures under the influence of earthquake ground motions. The proposed learning methodology obviates the need for developing a mathematical model of structural dynamics or training a separate neural network to emulate the structural response for implementation in practice.
Hommes, J; Rienties, B; de Grave, W; Bos, G; Schuwirth, L; Scherpbier, A
2012-12-01
World-wide, universities in the health sciences have transformed their curricula to include collaborative learning and to facilitate the students' learning process. Interaction has been acknowledged to be the synergistic element in this learning context. However, students spend the majority of their time outside the classroom, and interaction does not stop there. We therefore studied how informal social interaction influences student learning. Moreover, to explore what really matters in the students' learning process, we tested a model of how the well-established constructs of prior performance, motivation and social integration relate to informal social interaction and student learning. 301 undergraduate medical students participated in this cross-sectional quantitative study. Informal social interaction was assessed using self-reported surveys following the network approach. Students' individual motivation, social integration and prior performance were assessed by the Academic Motivation Scale, the College Adaptation Questionnaire and students' GPA, respectively. A factual knowledge test represented students' learning. All social networks were significantly positively associated with student learning: friendships (β = 0.11), providing information to other students (β = 0.16), and receiving information from other students (β = 0.25). Structural equation modelling revealed a model in which social networks increased student learning (r = 0.43), followed by prior performance (r = 0.31). In contrast to prior literature, students' academic motivation and social integration were not associated with student learning. Students' informal social interaction is strongly associated with their learning. These findings underline the need to shift our focus from the formal context (the classroom) to the informal context in order to optimize student learning and train modern medical professionals.
Energy consumption analysis for various memristive networks under different learning strategies
NASA Astrophysics Data System (ADS)
Deng, Lei; Wang, Dong; Zhang, Ziyang; Tang, Pei; Li, Guoqi; Pei, Jing
2016-02-01
Recently, various memristive systems have emerged to emulate the efficient computing paradigm of the brain cortex; however, how to make them energy efficient remains unclear, especially from an overall perspective. Here, a systematic, bottom-up energy consumption analysis is presented, covering both the memristor device level and the network learning level. We propose a methodology for estimating the energy spent in modulating the memristive synapses, which is simulated in three typical neural networks with different synaptic structures and learning strategies for both offline and online learning. These results provide in-depth insight for creating energy-efficient brain-inspired neuromorphic devices in the future.
Unsupervised learning in general connectionist systems.
Dente, J A; Mendes, R Vilela
1996-01-01
There is a common framework in which different connectionist systems may be treated in a unified way. The general system onto which they may all be mapped is a network that, in addition to the connection strengths, has an adaptive node parameter controlling the output intensity. In this paper we generalize two neural network learning schemes to networks with node parameters. In generalized Hebbian learning we find improvements to the convergence rate for small eigenvalues in principal component analysis. For competitive learning, the use of node parameters also proves useful: by emphasizing or de-emphasizing the dominance of winning neurons, either improved robustness or improved discrimination is obtained.
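The following sketch illustrates what a node-parameter extension of Hebbian learning might look like; the gain update shown is an assumption for illustration, not the rule derived in the paper.

```python
# Sketch of Oja-style Hebbian learning of the first principal component, extended
# with a hypothetical adaptive node parameter g scaling the output (the paper's
# exact node-parameter update is not reproduced here).
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[2.0, 0.0], [1.5, 0.5]])
X = rng.normal(size=(5000, 2)) @ A            # correlated 2-D data

w = rng.normal(size=2)                        # connection strengths
g = 1.0                                       # adaptive node parameter (output gain)
eta_w, eta_g = 1e-3, 1e-4

for x in X:
    y = g * (w @ x)                           # node output scaled by its parameter
    w += eta_w * y * (x - y * w)              # Oja's rule: Hebbian growth with decay
    g += eta_g * (1.0 - y * y)                # assumed gain rule pushing output variance toward 1

C = np.cov(X, rowvar=False)
_, evecs = np.linalg.eigh(C)
print("learned direction :", w / np.linalg.norm(w))
print("leading eigenvector:", evecs[:, -1])
```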
Kenney, Michael; Horgan, John; Horne, Cale; Vining, Peter; Carley, Kathleen M; Bigrigg, Michael W; Bloom, Mia; Braddock, Kurt
2013-09-01
Social networks are said to facilitate learning and adaptation by providing the connections through which network nodes (or agents) share information and experience. Yet our understanding of how this process unfolds in real-world networks remains underdeveloped. This paper explores this gap through a case study of al-Muhajiroun, an activist network that continues to call for the establishment of an Islamic state in Britain despite being formally outlawed by British authorities. Drawing on organisation theory and social network analysis, we formulate three hypotheses regarding the learning capacity and social network properties of al-Muhajiroun (AM) and its successor groups. We then test these hypotheses using mixed methods. Our methods combine quantitative analysis of three agent-based networks in AM, measured for structural properties that facilitate learning (including connectedness, betweenness centrality and eigenvector centrality), with qualitative analysis of interviews with AM activists focusing on organisational adaptation and learning. The results of these analyses confirm that al-Muhajiroun activists respond to government pressure by changing their operations, including creating new platforms under different names and adjusting leadership roles among movement veterans to accommodate their spiritual leader's unwelcome exodus to Lebanon. As simple as they are effective, these adaptations have allowed al-Muhajiroun and its successor groups to continue their activism in an increasingly hostile environment. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
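The structural measures named in the abstract can be computed with standard tools; the sketch below uses networkx on a small hypothetical graph, not the study's data.

```python
# Illustrative sketch of the structural measures named in the abstract, computed
# with networkx on a small hypothetical agent network (not the study's data).
import networkx as nx

edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F"), ("D", "F")]
G = nx.Graph(edges)

print("density (connectedness):", nx.density(G))
print("betweenness centrality :", nx.betweenness_centrality(G))
print("eigenvector centrality :", nx.eigenvector_centrality(G, max_iter=1000))
```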
Soto, Fabian A.; Bassett, Danielle S.; Ashby, F. Gregory
2016-01-01
Recent work has shown that multimodal association areas–including frontal, temporal and parietal cortex–are focal points of functional network reconfiguration during human learning and performance of cognitive tasks. On the other hand, neurocomputational theories of category learning suggest that the basal ganglia and related subcortical structures are focal points of functional network reconfiguration during early learning of some categorization tasks, but become less so with the development of automatic categorization performance. Using a combination of network science and multilevel regression, we explore how changes in the connectivity of small brain regions can predict behavioral changes during training in a visual categorization task. We find that initial category learning, as indexed by changes in accuracy, is predicted by increasingly efficient integrative processing in subcortical areas, with higher functional specialization, more efficient integration across modules, but a lower cost in terms of redundancy of information processing. The development of automaticity, as indexed by changes in the speed of correct responses, was predicted by lower clustering (particularly in subcortical areas), higher strength (highest in cortical areas) and higher betweenness centrality. By combining neurocomputational theories and network scientific methods, these results synthesize the dissociative roles of multimodal association areas and subcortical structures in the development of automaticity during category learning. PMID:27453156
Nakano, Takashi; Otsuka, Makoto; Yoshimoto, Junichiro; Doya, Kenji
2015-01-01
A theoretical framework of reinforcement learning plays an important role in understanding action selection in animals. Spiking neural networks provide a theoretically grounded means to test computational hypotheses on neurally plausible algorithms of reinforcement learning through numerical simulation. However, most of these models cannot handle observations which are noisy, or occurred in the past, even though these are inevitable and constraining features of learning in real environments. This class of problem is formally known as partially observable reinforcement learning (PORL) problems. It provides a generalization of reinforcement learning to partially observable domains. In addition, observations in the real world tend to be rich and high-dimensional. In this work, we use a spiking neural network model to approximate the free energy of a restricted Boltzmann machine and apply it to the solution of PORL problems with high-dimensional observations. Our spiking network model solves maze tasks with perceptually ambiguous high-dimensional observations without knowledge of the true environment. An extended model with working memory also solves history-dependent tasks. The way spiking neural networks handle PORL problems may provide a glimpse into the underlying laws of neural information processing which can only be discovered through such a top-down approach. PMID:25734662
Reward-Modulated Hebbian Plasticity as Leverage for Partially Embodied Control in Compliant Robotics
Burms, Jeroen; Caluwaerts, Ken; Dambre, Joni
2015-01-01
In embodied computation (or morphological computation), part of the complexity of motor control is offloaded to the body dynamics. We demonstrate that a simple Hebbian-like learning rule can be used to train systems with (partial) embodiment, and can be extended outside of the scope of traditional neural networks. To this end, we apply the learning rule to optimize the connection weights of recurrent neural networks with different topologies and for various tasks. We then apply this learning rule to a simulated compliant tensegrity robot by optimizing static feedback controllers that directly exploit the dynamics of the robot body. This leads to partially embodied controllers, i.e., hybrid controllers that naturally integrate the computations that are performed by the robot body into a neural network architecture. Our results demonstrate the universal applicability of reward-modulated Hebbian learning. Furthermore, they demonstrate the robustness of systems trained with the learning rule. This study strengthens our belief that compliant robots should or can be seen as computational units, instead of dumb hardware that needs a complex controller. This link between compliant robotics and neural networks is also the main reason for our search for simple universal learning rules for both neural networks and robotics. PMID:26347645
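A minimal sketch of a reward-modulated Hebbian update of the general form used in such work is shown below; the task, dimensions and hyperparameters are toy assumptions rather than the paper's tensegrity setup.

```python
# Minimal sketch (my assumptions, not the paper's setup): a reward-modulated
# Hebbian rule, dW ~ eta * (reward - baseline) * presynaptic * exploration_noise,
# training a linear readout on a toy regression task from a scalar reward only.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 3))                  # fixed batch of "sensor" inputs
W_target = rng.normal(size=(3, 2))
T = X @ W_target                              # desired "motor" outputs

W = np.zeros((3, 2))
eta, sigma = 0.05, 0.1
baseline = None                               # running estimate of the average reward

for step in range(20000):
    noise = rng.normal(scale=sigma, size=T.shape)       # exploration added to the outputs
    Y = X @ W + noise
    reward = -np.mean((Y - T) ** 2)                     # single scalar reward per trial
    if baseline is None:
        baseline = reward
    # Eligibility: correlation of presynaptic activity with the exploration noise,
    # gated by whether this trial was better than average.
    W += eta * (reward - baseline) * (X.T @ noise) / (len(X) * sigma ** 2)
    baseline += 0.2 * (reward - baseline)

print("error with untrained weights:", round(np.mean(T ** 2), 3))
print("error after training        :", round(np.mean((X @ W - T) ** 2), 3))
```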
Voss, Michelle W; Prakash, Ruchika Shaurya; Erickson, Kirk I; Boot, Walter R; Basak, Chandramallika; Neider, Mark B; Simons, Daniel J; Fabiani, Monica; Gratton, Gabriele; Kramer, Arthur F
2012-01-02
We used the Space Fortress videogame, originally developed by cognitive psychologists to study skill acquisition, as a platform to examine learning-induced plasticity of interacting brain networks. Novice videogame players learned Space Fortress using one of two training strategies: (a) focus on all aspects of the game during learning (fixed priority), or (b) focus on improving separate game components in the context of the whole game (variable priority). Participants were scanned during game play using functional magnetic resonance imaging (fMRI), both before and after 20 h of training. As expected, variable priority training enhanced learning, particularly for individuals who initially performed poorly. Functional connectivity analysis revealed changes in brain network interaction reflective of more flexible skill learning and retrieval with variable priority training, compared to procedural learning and skill implementation with fixed priority training. These results provide the first evidence for differences in the interaction of large-scale brain networks when learning with different training strategies. Our approach and findings also provide a foundation for exploring the brain plasticity involved in transfer of trained abilities to novel real-world tasks such as driving, sport, or neurorehabilitation. Copyright © 2011 Elsevier Inc. All rights reserved.
Artificial neuron-glia networks learning approach based on cooperative coevolution.
Mesejo, Pablo; Ibáñez, Oscar; Fernández-Blanco, Enrique; Cedrón, Francisco; Pazos, Alejandro; Porto-Pazos, Ana B
2015-06-01
Artificial Neuron-Glia Networks (ANGNs) are a novel bio-inspired machine learning approach. They extend classical Artificial Neural Networks (ANNs) by incorporating recent findings and suppositions about the way information is processed by neural and astrocytic networks in the most evolved living organisms. Although ANGNs are not a consolidated method, their performance against the traditional approach, i.e. without artificial astrocytes, has already been demonstrated on classification problems. However, the corresponding learning algorithms developed so far strongly depend on a set of glial parameters that are manually tuned for each specific problem. As a consequence, preliminary experimental tests have to be run to determine an adequate set of values, making such manual parameter configuration time-consuming, error-prone, biased and problem-dependent. Thus, in this paper, we propose a novel learning approach for ANGNs that fully automates the learning process and makes it possible to test any kind of reasonable parameter configuration for each specific problem. This new learning algorithm, based on coevolutionary genetic algorithms, is able to learn all of the ANGN parameters. Its performance is tested on five classification problems, achieving significantly better results than ANGN and competitive results with ANN approaches.
Student Learning Networks on Residential Field Courses: Does Size Matter?
ERIC Educational Resources Information Center
Langan, A. Mark; Cullen, W. Rod; Shuker, David M.
2008-01-01
This article describes learner and tutor reports of a learning network that formed during the completion of investigative projects on a residential field course. Staff and students recorded project-related interactions, who they were with and how long they lasted over four phases during the field course. An enquiry based learning format challenged…
The Effect of Virtual versus Traditional Learning in Achieving Competency-Based Skills
ERIC Educational Resources Information Center
Mosalanejad, Leili; Shahsavari, Sakine; Sobhanian, Saeed; Dastpak, Mehdi
2012-01-01
Background: By rapid developing of the network technology, the internet-based learning methods are substituting the traditional classrooms making them expand to the virtual network learning environment. The purpose of this study was to determine the effectiveness of virtual systems on competency-based skills of first-year nursing students.…
Academic Social Networking Brings Web 2.0 Technologies to the Middle Grades
ERIC Educational Resources Information Center
Taranto, Gregory; Dalbon, Melissa; Gaetano, Julie
2011-01-01
The middle grades are an exciting time for adolescents to explore, learn, and collaborate with one another (National Middle School Association, 2010). By incorporating an academic social network as part of the classroom experience, collaboration and active learning take on new forms, and a transformation from passive learning to active learning…
Networking for English Literature Class: Cooperative Learning in Chinese Context
ERIC Educational Resources Information Center
Li, Huiyin
2017-01-01
This action research was conducted to investigate the efficacy of networking, an adjusted cooperative learning method employed in an English literature class for non-English majors in China. Questionnaire was administered online anonymously to college students after a 14-week cooperative learning in literature class in a Chinese university, aiming…
Institutionalizing Community-Based Learning and Research: The Case for External Networks
ERIC Educational Resources Information Center
Shrader, Elizabeth; Saunders, Mary Anne; Marullo, Sam; Benatti, Sylvia; Weigert, Kathleen Maas
2008-01-01
Conversations continue as to whether and how community-based learning and research (CBLR) can be most effectively integrated into the mission and practice of institutions of higher education (IHEs). In 2005, eight District of Columbia- (DC-) area universities affiliated with the Community Research and Learning (CoRAL) Network engaged in a planning…
Learning through Social Networking Sites--The Critical Role of the Teacher
ERIC Educational Resources Information Center
Callaghan, Noelene; Bower, Matt
2012-01-01
This comparative case study examined factors affecting behaviour and learning in social networking sites (SNS). The behaviour and learning of two classes completing identical SNS based modules of work was observed and compared. All student contributions to the SNS were analysed, with the cognitive process dimension of the Revised Bloom's Taxonomy…
Implicit and Explicit Learning Mechanisms Meet in Monkey Prefrontal Cortex.
Chafee, Matthew V; Crowe, David A
2017-10-11
In this issue, Loonis et al. (2017) provide the first description of unique synchrony patterns differentiating implicit and explicit forms of learning in monkey prefrontal networks. Their results have broad implications for how prefrontal networks integrate the two learning mechanisms to control behavior. Copyright © 2017 Elsevier Inc. All rights reserved.
ICT & Learning in Chilean Schools: Lessons Learned
ERIC Educational Resources Information Center
Sanchez, Jaime; Salinas, Alvaro
2008-01-01
By the early nineties a Chilean network on computers and education for public schools had emerged. There were both high expectancies that technology could revolutionize education as well as divergent voices that doubted the real impact of technology on learning. This paper presents an evaluation of the Enlaces network, a national Information and…
The Unexpected Connection: Serendipity and Human Mediation in Networked Learning
ERIC Educational Resources Information Center
Kop, Rita
2012-01-01
Major changes on the Web in recent years have contributed to an abundance of information for people to harness in their learning. Emerging technologies have instigated the need for critical literacies to support learners on open online networks in the mastering of critical information gathering during their learning journeys. This paper will argue…
Approximate Optimal Control as a Model for Motor Learning
ERIC Educational Resources Information Center
Berthier, Neil E.; Rosenstein, Michael T.; Barto, Andrew G.
2005-01-01
Current models of psychological development rely heavily on connectionist models that use supervised learning. These models adapt network weights when the network output does not match the target outputs computed by some agent. The authors present a model of motor learning in which the child uses exploration to discover appropriate ways of…
ERIC Educational Resources Information Center
Nesic, Sasa; Gasevic, Dragan; Jazayeri, Mehdi; Landoni, Monica
2011-01-01
Semantic web technologies have been applied to many aspects of learning content authoring including semantic annotation, semantic search, dynamic assembly, and personalization of learning content. At the same time, social networking services have started to play an important role in the authoring process by supporting authors' collaborative…
Recasting Distance Learning with Network-Enabled Open Education: An Interview with Vijay Kumar
ERIC Educational Resources Information Center
Morrison, James L.; Kumar, Vijay
2008-01-01
In an interview with James Morrison, "Innovate's" editor-in-chief, Vijay Kumar describes how rethinking distance learning as network-enabled open education can catalyze a whole new set of learning opportunities. The growing open-education movement has made an increasing number and variety of resources freely available online, including everything…
Building a Personal Learning Network for Intellectual Freedom: Join the Conversation
ERIC Educational Resources Information Center
Keuler, Annalisa
2012-01-01
Building a personal learning network (PLN) for intellectual freedom has long been an important role of a school librarian; however, in the steadily increasing onslaught of digital information that librarians face today, and in the future, the task has become mission-critical. Personal learning, it stands to reason, requires an appropriate dialogue…
Making Practice Public: Teacher Learning in the 21st Century
ERIC Educational Resources Information Center
Lieberman, Ann; Pointer Mace, Desiree
2010-01-01
We propose that the advent and ubiquity of new media tools and social networking resources provide a means for professional, networked learning to "scale up." We preface our discussion with a review of research that has led us to argue for professional learning communities, document the policies and practices of professional development…
ERIC Educational Resources Information Center
Cardona-Divale, Maria Victoria
2012-01-01
Learners often report difficulty maintaining social connectivity in online courses. Technology is quickly changing how people communicate, collaborate and learn using online social networking sites (SNSs). These sites have transformed education in a way that provides new learning opportunities when integrated with web 2.0 tools. Little research is…
From Personal to Social: Learning Environments that Work
ERIC Educational Resources Information Center
Camacho, Mar; Guilana, Sonia
2011-01-01
VLEs (Virtual Learning Environments) are rapidly falling short of meeting the demands of a networked society. Web 2.0 and social networks are proving to offer a more personalized, open environment for students to learn formally, as they are already doing informally. With the irruption of social media into society, and therefore education, many voices…
Chambers, R. Andrew; Conroy, Susan K.
2010-01-01
Apoptotic and neurogenic events in the adult hippocampus are hypothesized to play a role in cognitive responses to new contexts. Corticosteroid-mediated stress responses and other neural processes invoked by substantially novel contextual changes may regulate these processes. Using elementary three-layer neural networks that learn by incremental synaptic plasticity, we explored whether the cognitive effects of differential regimens of neuronal turnover depend on the environmental context in terms of the degree of novelty in the new information to be learned. In “adult” networks that had achieved mature synaptic connectivity upon prior learning of the Roman alphabet, imposition of apoptosis/neurogenesis before learning increasingly novel information (alternate Roman < Russian < Hebrew) reveals optimality of informatic cost benefits when rates of turnover are geared in proportion to the degree of novelty. These findings predict that flexible control of rates of apoptosis–neurogenesis within plastic, mature neural systems optimizes learning attributes under varying degrees of contextual change, and that failures in this regulation may define a role for adult hippocampal neurogenesis in novelty- and stress-responsive psychiatric disorders. PMID:17214558
Pimashkin, Alexey; Gladkov, Arseniy; Mukhina, Irina; Kazantsev, Victor
2013-01-01
Learning in neuronal networks can be investigated using dissociated cultures on multielectrode arrays supplied with appropriate closed-loop stimulation. Previous studies showed that weakly responding neurons on the electrodes can be trained to increase their evoked spiking rate within a predefined time window after the stimulus. Such neurons can be associated with weak synaptic connections in the nearby culture network. The stimulation leads to an increase in the connectivity and in the response. However, it was not possible to perform the learning protocol for neurons on electrodes with relatively strong synaptic inputs that respond at higher rates. We proposed an adaptive closed-loop stimulation protocol capable of achieving learning even for the highly responsive electrodes, meaning that the culture network can appropriately reorganize its synaptic connectivity to generate a desired response. We introduced an adaptive reinforcement condition accounting for the response variability in control stimulation, which significantly extended the learning protocol to a large number of responding electrodes independently of their baseline response level. We also found that the learning effect persisted 4–6 h after training. PMID:23745105
Super-resolution reconstruction of MR image with a novel residual learning network algorithm
NASA Astrophysics Data System (ADS)
Shi, Jun; Liu, Qingping; Wang, Chaofeng; Zhang, Qi; Ying, Shihui; Xu, Haoyu
2018-04-01
Spatial resolution is one of the key parameters of magnetic resonance imaging (MRI). The image super-resolution (SR) technique offers an alternative approach to improve the spatial resolution of MRI due to its simplicity. Convolutional neural networks (CNN)-based SR algorithms have achieved state-of-the-art performance, in which the global residual learning (GRL) strategy is now commonly used due to its effectiveness for learning image details for SR. However, the partial loss of image details usually happens in a very deep network due to the degradation problem. In this work, we propose a novel residual learning-based SR algorithm for MRI, which combines both multi-scale GRL and shallow network block-based local residual learning (LRL). The proposed LRL module works effectively in capturing high-frequency details by learning local residuals. One simulated MRI dataset and two real MRI datasets have been used to evaluate our algorithm. The experimental results show that the proposed SR algorithm achieves superior performance to all of the other compared CNN-based SR algorithms in this work.
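For orientation, the sketch below shows a much simplified single-scale version of the GRL-plus-LRL idea in PyTorch; the multi-scale GRL and the training procedure from the paper are omitted, and the layer sizes are arbitrary.

```python
# Hedged sketch (my own simplification, not the authors' architecture): a small
# PyTorch model combining global residual learning (the interpolated input is added
# back to the output) with local residual learning inside shallow blocks.
import torch
import torch.nn as nn

class LocalResidualBlock(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)              # LRL: the block learns a local residual

class ResidualSRNet(nn.Module):
    def __init__(self, channels=16, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        self.blocks = nn.Sequential(*[LocalResidualBlock(channels) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, lr_upsampled):
        x = self.head(lr_upsampled)
        x = self.blocks(x)
        residual = self.tail(x)
        return lr_upsampled + residual       # GRL: the network predicts the global residual

model = ResidualSRNet()
fake_input = torch.randn(1, 1, 64, 64)       # stand-in for an interpolated low-resolution MR slice
print(model(fake_input).shape)
```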
Evolving autonomous learning in cognitive networks.
Sheneman, Leigh; Hintze, Arend
2017-12-01
There are two common approaches for optimizing the performance of a machine: genetic algorithms and machine learning. A genetic algorithm is applied over many generations, whereas machine learning works by applying feedback until the system meets a performance threshold. These methods have been combined before, particularly in artificial neural networks using an external objective feedback mechanism. We adapt this approach to Markov Brains, which are evolvable networks of probabilistic and deterministic logic gates. Prior to this work, Markov Brains could only adapt from one generation to the next, so we introduce feedback gates that augment their ability to learn during their lifetime. We show that Markov Brains can incorporate these feedback gates in such a way that they do not rely on an external objective feedback signal, but instead generate internal feedback that is then used to learn. This results in a more biologically accurate model of the evolution of learning, which will enable us to study the interplay between evolution and learning and could be another step towards autonomously learning machines.
Boehm, Stephan G; Smith, Ciaran; Muench, Niklas; Noble, Kirsty; Atherton, Catherine
2017-08-31
Repetition priming increases the accuracy and speed of responses to repeatedly processed stimuli. Repetition priming can result from two complementary sources: rapid response learning and facilitation within perceptual and conceptual networks. In conceptual classification tasks, rapid response learning dominates priming of object recognition, but it does not dominate priming of person recognition. This suggests that the relative engagement of network facilitation and rapid response learning depends on the stimulus domain. Here, we addressed the importance of the stimulus domain for rapid response learning by investigating priming in another domain, brands. In three experiments, participants performed conceptual decisions for brand logos. Strong priming was present, but it was not dominated by rapid response learning. These findings add further support to the importance of the stimulus domain for the relative importance of network facilitation and rapid response learning, and they indicate that brand priming is more similar to person recognition priming than object recognition priming, perhaps because priming of both brands and persons requires individuation.
Parsing learning in networks using brain-machine interfaces.
Orsborn, Amy L; Pesaran, Bijan
2017-10-01
Brain-machine interfaces (BMIs) define new ways to interact with our environment and hold great promise for clinical therapies. Motor BMIs, for instance, re-route neural activity to control movements of a new effector and could restore movement to people with paralysis. Increasing experience shows that interfacing with the brain inevitably changes the brain. BMIs engage and depend on a wide array of innate learning mechanisms to produce meaningful behavior. BMIs precisely define the information streams into and out of the brain, but engage wide-spread learning. We take a network perspective and review existing observations of learning in motor BMIs to show that BMIs engage multiple learning mechanisms distributed across neural networks. Recent studies demonstrate the advantages of BMI for parsing this learning and its underlying neural mechanisms. BMIs therefore provide a powerful tool for studying the neural mechanisms of learning that highlights the critical role of learning in engineered neural therapies. Copyright © 2017 Elsevier Ltd. All rights reserved.
Two-Stage Approach to Image Classification by Deep Neural Networks
NASA Astrophysics Data System (ADS)
Ososkov, Gennady; Goncharov, Pavel
2018-02-01
The paper demonstrates the advantages of deep learning networks over ordinary neural networks in comparative applications to image classification. An autoassociative neural network is used as a standalone autoencoder to first extract the most informative features of the input data for the networks that are subsequently compared as classifiers. Most of the effort in working with deep learning networks goes into the painstaking optimization of the network structures and their components, such as activation functions and weights, as well as of the procedures for minimizing the loss function, in order to improve performance and speed up training. It is also shown that deep autoencoders, after suitable training, develop a remarkable ability to denoise images. Convolutional neural networks are also applied to a current problem in protein genetics, using durum wheat classification as an example. The results of our comparative study demonstrate the clear advantage of the deep networks, as well as the denoising power of the autoencoders. In our work we use both GPU and cloud services to speed up the calculations.
Ormoneit, D
1999-12-01
We consider the training of neural networks in cases where the nonlinear relationship of interest gradually changes over time. One way to deal with this problem is regularization, in which a variation penalty is added to the usual mean squared error criterion. To learn the regularized network weights we suggest the Iterative Extended Kalman Filter (IEKF) as a learning rule, which may be derived from a Bayesian perspective on the regularization problem. A primary application of our algorithm is in financial derivatives pricing, where neural networks may be used to model the dependency of the derivatives' price on one or several underlying assets. After a brief introduction to the problem of derivatives pricing, we present experiments with German stock index options data showing that a regularized neural network trained with the IEKF outperforms several benchmark models and alternative learning procedures. In particular, the performance may be greatly improved using a newly designed neural network architecture that accounts for no-arbitrage pricing restrictions.
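The sketch below shows a plain Extended Kalman Filter weight update for a tiny network, treating the weights as the state vector; the iterative variant and the variation penalty proposed in the paper are not reproduced.

```python
# Rough sketch (my assumptions, not the paper's IEKF variant): Extended Kalman
# Filter training of a tiny 1-4-1 network on noisy sin(x) data, with the weights
# treated as the state vector. Process noise Q lets the weights keep adapting
# if the underlying relationship drifts over time.
import numpy as np

rng = np.random.default_rng(4)

def net(w, x):
    W1, b1, W2, b2 = w[:4], w[4:8], w[8:12], w[12]
    h = np.tanh(W1 * x + b1)
    return h @ W2 + b2

def jacobian(w, x, eps=1e-6):
    # numerical derivative of the scalar output with respect to the weight vector
    f0 = net(w, x)
    J = np.zeros_like(w)
    for i in range(len(w)):
        wp = w.copy()
        wp[i] += eps
        J[i] = (net(wp, x) - f0) / eps
    return J

w = rng.normal(scale=0.3, size=13)
P = np.eye(13)                # weight covariance
R = 0.01                      # measurement noise variance
Q = 1e-5 * np.eye(13)         # process noise

for step in range(2000):
    x = rng.uniform(-2, 2)
    y = np.sin(x) + rng.normal(scale=0.1)   # a slowly drifting target could be substituted here
    H = jacobian(w, x)
    S = H @ P @ H + R                       # innovation variance (scalar output)
    K = P @ H / S                           # Kalman gain
    w = w + K * (y - net(w, x))             # weight update driven by the innovation
    P = P - np.outer(K, H @ P) + Q

print("fit at x=1.0:", net(w, 1.0), "target:", np.sin(1.0))
```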
Using i2b2 to Bootstrap Rural Health Analytics and Learning Networks
Harris, Daniel R.; Baus, Adam D.; Harper, Tamela J.; Jarrett, Traci D.; Pollard, Cecil R.; Talbert, Jeffery C.
2017-01-01
We demonstrate that the open-source i2b2 (Informatics for Integrating Biology and the Bedside) data model can be used to bootstrap rural health analytics and learning networks. These networks promote communication and research initiatives by providing the infrastructure necessary for sharing data and insights across a group of healthcare and research partners. Data integration remains a crucial challenge in connecting rural healthcare sites with a common data sharing and learning network due to the lack of interoperability and standards within electronic health records. The i2b2 data model acts as a point of convergence for disparate data from multiple healthcare sites. A consistent and natural data model for healthcare data is essential for overcoming integration issues, but challenges such as those caused by weak data standardization must still be addressed. We describe our experience in the context of building the West Virginia/Kentucky Health Analytics and Learning Network, a collaborative, multi-state effort connecting rural healthcare sites. PMID:28261006
The value of prior knowledge in machine learning of complex network systems.
Ferranti, Dana; Krane, David; Craft, David
2017-11-15
Our overall goal is to develop machine-learning approaches based on genomics and other relevant accessible information for use in predicting how a patient will respond to a given proposed drug or treatment. Given the complexity of this problem, we begin by developing, testing and analyzing learning methods using data from simulated systems, which allows us access to a known ground truth. We examine the benefits of using prior system knowledge and investigate how learning accuracy depends on various system parameters as well as the amount of training data available. The simulations are based on Boolean networks (directed graphs with 0/1 node states and logical node update rules), which are the simplest computational systems that can mimic the dynamic behavior of cellular systems. Boolean networks can be generated and simulated at scale, have complex yet cyclical dynamics, and as such provide a useful framework for developing machine-learning algorithms for modular and hierarchical networks such as biological systems in general and cancer in particular. We demonstrate that utilizing prior knowledge (in the form of network connectivity information), without detailed state equations, greatly increases the power of machine-learning algorithms to predict network steady-state node values ('phenotypes') and perturbation responses ('drug effects'). Links to codes and datasets: https://gray.mgh.harvard.edu/people-directory/71-david-craft-phd. dcraft@broadinstitute.org. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
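A minimal example of the kind of Boolean network used as the simulation test bed is sketched below (random wiring and truth tables are placeholders); simulating such networks to a steady state is what generates the 'phenotype' targets that the learning methods are trained to predict.

```python
# Small sketch of a Boolean network of the kind described above: a directed graph,
# 0/1 node states, and a logical update rule (truth table) per node.
import numpy as np

rng = np.random.default_rng(5)
n = 8                                          # number of nodes ("genes")
k = 2                                          # inputs per node
inputs = [rng.choice(n, size=k, replace=False) for _ in range(n)]
tables = [rng.integers(0, 2, size=2 ** k) for _ in range(n)]   # random truth table per node

def step(state):
    new = np.zeros_like(state)
    for i in range(n):
        pattern = 0
        for bit in state[inputs[i]]:
            pattern = (pattern << 1) | int(bit)                # encode the inputs as an index
        new[i] = tables[i][pattern]
    return new

state = rng.integers(0, 2, size=n)
for _ in range(20):                            # iterate until the trajectory settles into an attractor
    state = step(state)
print("attractor state (phenotype proxy):", state)
```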
NASA Astrophysics Data System (ADS)
Ji, Zhengping; Ovsiannikov, Ilia; Wang, Yibing; Shi, Lilong; Zhang, Qiang
2015-05-01
In this paper, we develop a server-client quantization scheme to reduce the bit resolution of a deep learning architecture, namely convolutional neural networks, for image recognition tasks. Low bit resolution is an important factor in bringing deep learning neural networks to hardware implementation, since it directly determines cost and power consumption. We aim to reduce the bit resolution of the network without sacrificing its performance. To this end, we design a new quantization algorithm, called supervised iterative quantization, to reduce the bit resolution of the learned network weights. In the training stage, supervised iterative quantization is carried out on the server in two steps: applying k-means-based adaptive quantization to the learned network weights, and retraining the network based on the quantized weights. These two steps are alternated until the convergence criterion is met. In the testing stage, the network configuration and the low-bit weights are loaded onto the client hardware device to recognize incoming input in real time, where optimized but expensive quantization becomes infeasible. Considering this, we adopt a uniform quantization for the inputs and internal network responses (called feature maps) to keep on-chip cost low. The convolutional neural network with reduced weight and input/response precision is demonstrated on two types of images: hand-written digits and real-life images of office scenarios. Both results show that the new network achieves the performance of the full-bit-resolution network, even though the bit resolution of both weights and inputs is significantly reduced, e.g., from 64 bits to 4-5 bits.
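The server-side weight-quantization step can be pictured as k-means clustering of the trained weights followed by replacing each weight with its centroid; the sketch below shows that step alone, without the retraining loop described above.

```python
# Sketch of the server-side idea as I read it: cluster the trained weights with
# k-means and replace each weight with its cluster centroid. The retraining rounds
# between quantization steps are omitted here.
import numpy as np

def kmeans_quantize(weights, n_levels=16, n_iter=20):
    w = weights.ravel()
    centroids = np.linspace(w.min(), w.max(), n_levels)   # initialize over the weight range
    for _ in range(n_iter):
        assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        for c in range(n_levels):
            if np.any(assign == c):
                centroids[c] = w[assign == c].mean()
    return centroids[assign].reshape(weights.shape), centroids

rng = np.random.default_rng(6)
W = rng.normal(scale=0.1, size=(64, 32))          # stand-in for a trained layer's weights
W_q, levels = kmeans_quantize(W, n_levels=16)     # 16 levels, roughly 4-bit weights
print("unique levels:", len(np.unique(W_q)), "max abs error:", np.abs(W - W_q).max())
```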
Deep Learning of Orthographic Representations in Baboons
Hannagan, Thomas; Ziegler, Johannes C.; Dufau, Stéphane; Fagot, Joël; Grainger, Jonathan
2014-01-01
What is the origin of our ability to learn orthographic knowledge? We use deep convolutional networks to emulate the primate's ventral visual stream and explore the recent finding that baboons can be trained to discriminate English words from nonwords [1]. The networks were exposed to the exact same sequence of stimuli and reinforcement signals as the baboons in the experiment, and learned to map real visual inputs (pixels) of letter strings onto binary word/nonword responses. We show that the networks' highest levels of representations were indeed sensitive to letter combinations as postulated in our previous research. The model also captured the key empirical findings, such as generalization to novel words, along with some intriguing inter-individual differences. The present work shows the merits of deep learning networks that can simulate the whole processing chain all the way from the visual input to the response while allowing researchers to analyze the complex representations that emerge during the learning process. PMID:24416300
Bassett, Danielle S.; Mattar, Marcelo G.
2017-01-01
Humans adapt their behavior to their external environment in a process often facilitated by learning. Efforts to describe learning empirically can be complemented by quantitative theories that map changes in neurophysiology to changes in behavior. In this review we highlight recent advances in network science that offer a set of tools and a general perspective that may be particularly useful in understanding types of learning that are supported by distributed neural circuits. We describe recent applications of these tools to neuroimaging data that provide unique insights into adaptive neural processes, the attainment of knowledge, and the acquisition of new skills, forming a network neuroscience of human learning. While promising, the tools have yet to be linked to the well-formulated models of behavior that are commonly utilized in cognitive psychology. We argue that continued progress will require the explicit marriage of network approaches to neuroimaging data and quantitative models of behavior. PMID:28259554
Movahedi, Faezeh; Coyle, James L; Sejdic, Ervin
2018-05-01
Deep learning, a relatively new branch of machine learning, has been investigated for use in a variety of biomedical applications. Deep learning algorithms have been used to analyze different physiological signals and to gain a better understanding of human physiology for the automated diagnosis of abnormal conditions. In this paper, we provide an overview of deep learning approaches with a focus on deep belief networks in electroencephalography applications. We investigate the state-of-the-art algorithms for deep belief networks and then cover the application of these algorithms, and their performance, in electroencephalographic applications. We cover various applications of electroencephalography in medicine, including emotion recognition, sleep stage classification, and seizure detection, in order to understand how deep learning algorithms could be modified to better suit the tasks at hand. This review is intended to provide researchers with a broad overview of the currently existing deep belief network methodology for electroencephalography signals, as well as to highlight potential challenges for future research.
Attentional Bias in Human Category Learning: The Case of Deep Learning.
Hanson, Catherine; Caglar, Leyla Roskan; Hanson, Stephen José
2018-01-01
Category learning performance is influenced by both the nature of the category's structure and the way category features are processed during learning. Shepard (1964, 1987) showed that stimuli can have structures with features that are statistically uncorrelated (separable) or statistically correlated (integral) within categories. Humans find it much easier to learn categories having separable features, especially when attention to only a subset of relevant features is required, and harder to learn categories having integral features, which require consideration of all of the available features and integration of all the relevant category features satisfying the category rule (Garner, 1974). In contrast to humans, a single hidden layer backpropagation (BP) neural network has been shown to learn both separable and integral categories equally easily, independent of the category rule (Kruschke, 1993). This "failure" to replicate human category performance appeared to be strong evidence that connectionist networks were incapable of modeling human attentional bias. We tested the presumed limitations of attentional bias in networks in two ways: (1) by having networks learn categories with exemplars that have high feature complexity in contrast to the low dimensional stimuli previously used, and (2) by investigating whether a Deep Learning (DL) network, which has demonstrated humanlike performance in many different kinds of tasks (language translation, autonomous driving, etc.), would display human-like attentional bias during category learning. We were able to show a number of interesting results. First, we replicated the failure of BP to differentially process integral and separable category structures when low dimensional stimuli are used (Garner, 1974; Kruschke, 1993). Second, we show that using the same low dimensional stimuli, Deep Learning (DL), unlike BP but similar to humans, learns separable category structures more quickly than integral category structures. Third, we show that even BP can exhibit human like learning differences between integral and separable category structures when high dimensional stimuli (face exemplars) are used. We conclude, after visualizing the hidden unit representations, that DL appears to extend initial learning due to feature development thereby reducing destructive feature competition by incrementally refining feature detectors throughout later layers until a tipping point (in terms of error) is reached resulting in rapid asymptotic learning.
Formation of Community-Based Hypertension Practice Networks: Success, Obstacles, and Lessons Learned
Dart, Richard A.; Egan, Brent M.
2014-01-01
Community-based practice networks for research and improving the quality of care are growing in size and number but have variable success rates. In this paper we review recent efforts to initiate a community-based hypertension network modeled after the successful Outpatient Quality Improvement Network (O’QUIN) project, located at the Medical University of South Carolina. We highlight key lessons learned and new directions to be explored. PMID:24666425
Discriminative Cooperative Networks for Detecting Phase Transitions
NASA Astrophysics Data System (ADS)
Liu, Ye-Hua; van Nieuwenburg, Evert P. L.
2018-04-01
The classification of states of matter and their corresponding phase transitions is a special kind of machine-learning task, where physical data allow for the analysis of new algorithms, which have not been considered in the general computer-science setting so far. Here we introduce an unsupervised machine-learning scheme for detecting phase transitions with a pair of discriminative cooperative networks (DCNs). In this scheme, a guesser network and a learner network cooperate to detect phase transitions from fully unlabeled data. The new scheme is efficient enough for dealing with phase diagrams in two-dimensional parameter spaces, where we can utilize an active contour model—the snake—from computer vision to host the two networks. The snake, with a DCN "brain," moves and learns actively in the parameter space, and locates phase boundaries automatically.
Game-theoretic cooperativity in networks of self-interested units
NASA Astrophysics Data System (ADS)
Barto, Andrew G.
1986-08-01
The behavior of theoretical neural networks is often described in terms of competition and cooperation. I present an approach to network learning that is related to game and team problems in which competition and cooperation have more technical meanings. I briefly describe the application of stochastic learning automata to game and team problems and then present an adaptive element that is a synthesis of aspects of stochastic learning automata and typical neuron-like adaptive elements. These elements act as self-interested agents that work toward improving their performance with respect to their individual preference orderings. Networks of these elements can solve a variety of team decision problems, some of which take the form of layered networks in which the "hidden units" become appropriate functional components as they attempt to improve their own payoffs.
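A standard linear reward-inaction learning automaton of the kind referenced above can be sketched in a few lines; the payoff probabilities below are arbitrary.

```python
# Sketch of a linear reward-inaction (L_R-I) learning automaton: action
# probabilities are updated only after rewarded plays.
import numpy as np

rng = np.random.default_rng(8)
n_actions = 3
p = np.ones(n_actions) / n_actions          # action probability vector
reward_prob = np.array([0.2, 0.5, 0.8])     # unknown environment payoff probabilities
a = 0.05                                    # learning-rate parameter

for t in range(3000):
    action = rng.choice(n_actions, p=p)
    rewarded = rng.random() < reward_prob[action]
    if rewarded:                            # move probability mass toward the rewarded action
        p = (1 - a) * p
        p[action] += a
    # on penalty: do nothing ("inaction")

print("final action probabilities:", p)     # mass should concentrate on the 0.8-payoff action
```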
Fuzzy Logic Based Anomaly Detection for Embedded Network Security Cyber Sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ondrej Linda; Todd Vollmer; Jason Wright
Resiliency and security in critical infrastructure control systems in the modern world of cyber terrorism constitute a relevant concern. Developing a network security system specifically tailored to the requirements of such critical assets is of primary importance. This paper proposes a novel learning algorithm for an anomaly-based network security cyber sensor together with its hardware implementation. The presented learning algorithm constructs a fuzzy logic rule-based model of normal network behavior. Individual fuzzy rules are extracted directly from the stream of incoming packets using an online clustering algorithm. This learning algorithm was specifically developed to comply with the constrained computational requirements of low-cost embedded network security cyber sensors. The performance of the system was evaluated on a set of network data recorded from an experimental test-bed mimicking the environment of a critical infrastructure control system.
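The general flavour of the approach, as described, is online clustering of packet features into fuzzy "normal behaviour" prototypes and scoring new packets by their best membership; the sketch below is a generic illustration, not the paper's algorithm.

```python
# Hedged sketch of the general idea (not the paper's algorithm): build Gaussian
# fuzzy "normal behaviour" clusters online from packet features, then flag packets
# with low membership in every cluster as anomalous.
import numpy as np

rng = np.random.default_rng(7)

class OnlineFuzzyModel:
    def __init__(self, radius=1.0, threshold=0.2):
        self.centers = []            # cluster centres (fuzzy rule prototypes)
        self.radius = radius         # width of the Gaussian membership functions
        self.threshold = threshold   # minimum membership needed to join an existing cluster

    def membership(self, x):
        if not self.centers:
            return np.array([0.0])
        d = np.linalg.norm(np.array(self.centers) - x, axis=1)
        return np.exp(-(d / self.radius) ** 2)

    def learn(self, x):
        m = self.membership(x)
        if m.max() < self.threshold:
            self.centers.append(x.copy())                     # spawn a new fuzzy rule
        else:
            i = int(m.argmax())                               # nudge the closest prototype toward x
            self.centers[i] = 0.95 * self.centers[i] + 0.05 * x

    def anomaly_score(self, x):
        return 1.0 - self.membership(x).max()                 # 1 means it matches no normal-behaviour rule

model = OnlineFuzzyModel()
normal_packets = rng.normal(loc=0.0, scale=0.5, size=(500, 3))   # stand-in packet features
for pkt in normal_packets:
    model.learn(pkt)

print("normal packet score :", model.anomaly_score(rng.normal(0.0, 0.5, 3)))
print("unusual packet score:", model.anomaly_score(np.array([4.0, 4.0, 4.0])))
```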
Lifelong learning of human actions with deep neural network self-organization.
Parisi, German I; Tani, Jun; Weber, Cornelius; Wermter, Stefan
2017-12-01
Lifelong learning is fundamental in autonomous robotics for the acquisition and fine-tuning of knowledge through experience. However, conventional deep neural models for action recognition from videos do not account for lifelong learning but rather learn a batch of training data with a predefined number of action classes and samples. Thus, there is a need to develop learning systems with the ability to incrementally process available perceptual cues and to adapt their responses over time. We propose a self-organizing neural architecture for incrementally learning to classify human actions from video sequences. The architecture comprises growing self-organizing networks equipped with recurrent neurons for processing time-varying patterns. We use a set of hierarchically arranged recurrent networks for the unsupervised learning of action representations with increasingly large spatiotemporal receptive fields. Lifelong learning is achieved in terms of prediction-driven neural dynamics in which the growth and the adaptation of the recurrent networks are driven by their capability to reconstruct temporally ordered input sequences. Experimental results on a classification task using two action benchmark datasets show that our model is competitive with state-of-the-art batch-learning methods, even when a significant number of sample labels are missing or corrupted during training sessions. Additional experiments show the ability of our model to adapt to non-stationary input while avoiding catastrophic interference.
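As a rough illustration of the growth mechanism behind such architectures, the Python sketch below implements a single growing-when-required-style layer: a new neuron is inserted whenever the best-matching unit represents the input poorly, and existing neurons are adapted otherwise. The recurrent temporal context, the hierarchy of networks, and the classification readout described in the paper are omitted, and all thresholds and learning rates are illustrative assumptions.

```python
# Minimal sketch of a growing self-organizing layer driven by representation quality.
import numpy as np

class GrowingNetwork:
    def __init__(self, dim, activity_threshold=0.3, eps_b=0.2, eps_n=0.01):
        rng = np.random.default_rng(3)
        self.dim = dim
        self.w = [rng.normal(size=dim), rng.normal(size=dim)]  # two seed neurons
        self.a_t = activity_threshold
        self.eps_b, self.eps_n = eps_b, eps_n

    def step(self, x):
        d = np.array([np.linalg.norm(x - wi) for wi in self.w])
        b, s = np.argsort(d)[:2]                         # best and second-best units
        activity = np.exp(-d[b] ** 2 / (2 * self.dim))   # how well x is represented
        if activity < self.a_t:
            # Novelty: insert a new neuron between the input and the best unit.
            self.w.append((self.w[b] + x) / 2.0)
        else:
            # Familiar input: adapt the best unit strongly, the runner-up weakly.
            self.w[b] += self.eps_b * (x - self.w[b])
            self.w[s] += self.eps_n * (x - self.w[s])
        return activity

net = GrowingNetwork(dim=8)
rng = np.random.default_rng(4)
for _ in range(500):                   # stream of synthetic "action feature" vectors
    net.step(rng.normal(size=8) + rng.choice([0.0, 4.0]))
print(len(net.w))                      # the network grows to cover both input modes
```

Because new resources are allocated only where the current network represents the input poorly, previously learned prototypes are largely preserved, which is the basic intuition behind avoiding catastrophic interference in this family of models.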
A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potok, Thomas E; Schuman, Catherine D; Young, Steven R
Current deep learning models use highly optimized convolutional neural networks (CNN) trained on large graphics processing unit (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers, we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high-quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low-power memristive hardware. This represents a new capability that is not feasible with current von Neumann architectures, and it potentially enables the solution of very complicated problems that are unsolvable with current computing technologies.
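As a small-scale stand-in for the HPC role described above (automatically determining a layer-based topology), the Python sketch below runs a plain random search over hidden-layer configurations of a fully connected network on scikit-learn's digits dataset. It is only an illustration of automated topology search under stated assumptions: digits replaces MNIST, random search replaces the large-scale evolutionary search, and the quantum and neuromorphic components are not modeled.

```python
# Minimal sketch of automated layer-based topology search (illustrative only).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X / 16.0, y, random_state=0)

rng = np.random.default_rng(5)
best = (None, 0.0)
for _ in range(10):                       # random search over layered topologies
    depth = rng.integers(1, 4)            # 1-3 hidden layers, no intra-layer links
    widths = tuple(int(rng.integers(16, 129)) for _ in range(depth))
    clf = MLPClassifier(hidden_layer_sizes=widths, max_iter=300, random_state=0)
    clf.fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    if acc > best[1]:
        best = (widths, acc)

print(best)                               # best (hidden-layer widths, accuracy) found
```

The point of the sketch is only the search loop itself; the paper's contribution is running this kind of search at scale on HPC resources and combining it with quantum and neuromorphic hardware for the pieces that are intractable or power-hungry on conventional machines.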