Graph Omakase, Week 2 of February 2026
Graph Talks: LLM / GNN Integration (event registration link). Hello, this is 정이태 from GUG. 1. 입춘 (the first day of spring) is approaching, but the wind is still biting. Fortunately it died down last weekend, so the holiday gave us a brief breather; I hope you are all staying healthy. 2. These days the AI industry is moving beyond GraphRAG (soft prompts, KGE), a hot topic at NVIDIA GTC 2025, toward the upcoming…
Learning Topology and Physical Laws Beyond Nodes and Edges: E(n)-Equivariant Topological Neural Networks. Graph neural networks excel at modeling pairwise interactions, but they cannot flexibly accommodate higher-order interactions and features. Topological deep learning (TDL) has emerged recently as a promising tool for addressing…
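To make the abstract's pairwise-vs-higher-order distinction concrete, here is a minimal sketch (not the paper's E(n)-equivariant model; the graph, features, and update rules are illustrative assumptions): an ordinary edge-based message-passing step, and the same step extended with messages from triangles, the kind of higher-order cell topological deep learning operates on.

```python
# Toy contrast between pairwise (edge) message passing and a higher-order
# update that also aggregates over triangles (2-cells). Illustrative only.

def pairwise_update(x, edges):
    """One GNN-style step: each node adds its neighbors' features."""
    out = {v: x[v] for v in x}
    for u, v in edges:
        out[u] += x[v]
        out[v] += x[u]
    return out

def higher_order_update(x, edges, triangles):
    """Same step, but each node also receives one message per triangle
    it belongs to, summing the other two members' features."""
    out = pairwise_update(x, edges)
    for a, b, c in triangles:
        out[a] += x[b] + x[c]
        out[b] += x[a] + x[c]
        out[c] += x[a] + x[b]
    return out

# Toy complex: a filled triangle on nodes 0,1,2 plus a pendant node 3.
x = {0: 1.0, 1: 2.0, 2: 4.0, 3: 8.0}
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
triangles = [(0, 1, 2)]

print(pairwise_update(x, edges))
print(higher_order_update(x, edges, triangles))
```

Nodes on the triangle receive extra triangle messages, while the pendant node 3 is unaffected: exactly the kind of signal a purely pairwise GNN cannot distinguish from its edge messages.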
The Synergy Created by MCP and the Knowledge Graph of Agentic AI (Memgraph blog). GraphRAG & Knowledge Graphs: Making Your Data AI-Ready for 2026 (Fluree): discover why 78% of companies aren't AI-ready and how GraphRAG…
GUG Interview. Hello, this is 정이태 with news about GUG Talks. This session will be held online over lunchtime, together with Bryan Lee, a solutions engineer at Neo4j. Feel free to bring a light lunch and join in a relaxed frame of mind. Bryan's discussion topic: having recently gone through a wide range of research and trial and error on NER (Named Entity Recognition) tasks for building knowledge graphs, Bryan's…
mHC-GNN: Manifold-Constrained Hyper-Connections for Graph Neural Networks. Graph Neural Networks (GNNs) suffer from over-smoothing in deep architectures and expressiveness bounded by the 1-Weisfeiler-Leman (1-WL) test. We adapt Manifold-Constrained Hyper-Connections (mHC; Xie et al., 2025), recently proposed for Transformers, to graph neural networks. Our method, mHC-GNN, expands…
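The over-smoothing problem this abstract starts from is easy to see in a few lines. Below is a tiny demo (not the paper's method; the graph and features are made up): repeatedly averaging each node with its neighbors drives all features toward the same value, so deep stacks of such layers lose discriminative power.

```python
# Demo of over-smoothing: repeated neighborhood averaging collapses
# node features to near-identical values. Illustrative toy example.

def smooth(x, adj):
    """One mean-aggregation layer over adjacency lists (self included)."""
    return [(x[i] + sum(x[j] for j in adj[i])) / (1 + len(adj[i]))
            for i in range(len(x))]

adj = [[1], [0, 2], [1, 3], [2]]   # path graph 0-1-2-3
x = [0.0, 1.0, 2.0, 10.0]          # initially well-separated features

for _ in range(50):                # a "deep" stack of averaging layers
    x = smooth(x, adj)

spread = max(x) - min(x)
print(spread)                      # tiny: the nodes are indistinguishable
```

Techniques like residual or hyper-connections exist precisely to keep this spread from collapsing as depth grows.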
Beyond Context Graphs: Why 2026 Must Be the Year of Agentic Memory, Causality, and Explainability. https://medium.com/@volodymyrpavlyshyn/beyond-context-graphs-why-2026-must-be-the-year-of-agentic-memory-causality-and-explainability-db43632dbdee * Hello, subscribers. The hopeful new year of 2026 has dawned. Happy New Year, and may it be a year full of good things. * While browsing and reading various articles to decide what would make a good first omakase of 2026, I noticed that "these days…
Signals with Shape: Why Topology Matters for Modern Data? News article (SURE-AI). * Just like that, we greet you with the final Graph Omakase of 2025, the year of 을사 (Eulsa). How has this year been for you, subscribers? We send our regards to all of you who have driven innovation and great progress in your own fields. We hear Korea is in the grip of a severe cold wave right now, but everyone…
Comparing RAG and GraphRAG for Page-Level Retrieval Question Answering on Math Textbook. Technology-enhanced learning environments often help students retrieve relevant learning content for questions arising during self-paced study. Large language models (LLMs) have emerged as novel aids for information…
Learning to Retrieve and Reason on Knowledge Graph through Active Self-Reflection. Extensive research has investigated the integration of large language models (LLMs) with knowledge graphs to enhance the reasoning process. However, understanding how models perform reasoning utilizing structured graph knowledge…
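The retrieve-then-reflect loop the title alludes to can be sketched in miniature (this is my hedged reading, not the paper's algorithm; the knowledge graph, entities, and relations are all made up): expand outward from a seed entity one hop at a time, and after each hop "reflect" by checking whether the collected triples already contain the evidence needed, stopping early when they do.

```python
# Toy retrieve-and-reflect loop over a hand-written knowledge graph.
# Entities, relations, and the stopping check are illustrative assumptions.

KG = {
    ("Marie Curie", "born_in", "Warsaw"),
    ("Warsaw", "capital_of", "Poland"),
    ("Poland", "continent", "Europe"),
}

def retrieve(entities):
    """Return all triples whose head is in the current entity frontier."""
    return {(h, r, t) for (h, r, t) in KG if h in entities}

def reflect(triples, wanted_relation):
    """Self-check: do the retrieved triples already hold the answer?"""
    return any(r == wanted_relation for (_, r, _) in triples)

def retrieve_and_reason(seed, wanted_relation, max_hops=3):
    frontier, collected = {seed}, set()
    for _ in range(max_hops):
        triples = retrieve(frontier)
        collected |= triples
        if reflect(collected, wanted_relation):        # enough evidence: stop
            return [t for (_, r, t) in collected if r == wanted_relation][0]
        frontier = {t for (_, _, t) in triples}        # otherwise expand a hop
    return None

# A 2-hop question: which continent holds Marie Curie's birthplace?
print(retrieve_and_reason("Marie Curie", "continent"))
```

The reflection step is what keeps the loop from blindly expanding to a fixed depth: it retrieves only as far as the question requires.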
In-depth Analysis of Graph-based RAG in a Unified Framework. Graph-based Retrieval-Augmented Generation (RAG) has proven effective in integrating external knowledge into large language models (LLMs), improving their factual accuracy, adaptability, interpretability, and trustworthiness. A number of graph-based RAG methods have been proposed…
Why Ontologies Are Key for Data Governance in the LLM Era? Reference blog: https://medium.com/timbr-ai * Ontology has recently become a hot topic among the many companies considering LLM adoption. As you all know, an ontology can be understood, simply put, as a map that defines the relationships and meanings among data. Rather than merely storing data,…
NodeRAG: Structuring Graph-based RAG with Heterogeneous Nodes. Retrieval-augmented generation (RAG) empowers large language models to access external and private corpora, enabling factually consistent responses in specific domains. By exploiting the inherent structure of the corpus, graph-based RAG methods further enrich this process by building…
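One way to read the "heterogeneous nodes" in the title (my hedged interpretation; the node types, example graph, and routing function below are assumptions, not NodeRAG's actual design) is a retrieval graph whose nodes carry different types, such as text chunks, extracted entities, and high-level summaries, so retrieval can filter or route by node type.

```python
# Toy heterogeneous retrieval graph with typed nodes. Illustrative only.
from collections import namedtuple

Node = namedtuple("Node", ["id", "type", "text"])

nodes = [
    Node("n1", "chunk", "Euler introduced graph theory in 1736."),
    Node("n2", "entity", "Euler"),
    Node("n3", "summary", "Origins of graph theory."),
]
edges = [("n2", "n1"), ("n3", "n1")]  # entity/summary nodes point at chunks

def neighbors_of_type(node_id, wanted_type):
    """Follow edges from node_id, keeping only targets of the wanted type."""
    index = {n.id: n for n in nodes}
    return [index[t] for (s, t) in edges
            if s == node_id and index[t].type == wanted_type]

# Route a query that matched the entity node "Euler" to its source chunks.
chunks = neighbors_of_type("n2", "chunk")
print([c.id for c in chunks])
```

Typed nodes let the retriever answer entity-style queries with grounding chunks and survey-style queries with summaries, instead of treating every node as undifferentiated text.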