GUG News
Hello, this is Jeong Yitae from GUG. It feels like the first quarter of 2026 ended only yesterday, and yet April, the start of Q2, is already drawing to a close. I hope the plans you set at the beginning of the year are going well. Thanks to your support, GUG continues to grow steadily: as of April 27, 2026, as I write this, 857 subscribers are with us. Thank you, as always. Lately I have been neglecting GUG, with busyness as my excuse,
Learning Laplacian Forms for Graph Signal Processing via the Deformed Laplacian
Learning the graph Laplacian from observed data is one of the most investigated and fundamental tasks in Graph Signal Processing (GSP). Different variants of the Laplacian, such as the
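For readers unfamiliar with the Laplacian variants the excerpt mentions, here is a minimal numpy sketch (the toy path graph is my own illustration, not from the paper). The deformed Laplacian is the one-parameter family Δ(s) = I − sA + s²(D − I), which recovers the combinatorial Laplacian L = D − A at s = 1:

```python
import numpy as np

# Toy undirected path graph 0-1-2 (illustrative only, not from the paper).
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
deg = A.sum(axis=1)
D = np.diag(deg)

L = D - A                                                         # combinatorial Laplacian
L_sym = np.eye(3) - np.diag(deg**-0.5) @ A @ np.diag(deg**-0.5)   # symmetric normalized Laplacian

def deformed(s):
    # Deformed Laplacian: a one-parameter family of Laplacian forms.
    return np.eye(3) - s * A + s**2 * (D - np.eye(3))

# At s = 1 the deformed Laplacian reduces to the combinatorial one,
# and the constant signal lies in L's null space (zero graph frequency).
print(np.allclose(deformed(1.0), L))
print(np.allclose(L @ np.ones(3), 0))
```

Both checks hold by construction: substituting s = 1 cancels the identity terms, and every row of L sums to zero.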
GraphFrames, a major graph analysis library update
New GraphFrames release: Improved performance, new algorithms, and documentation (Sem Sinchenko on LinkedIn)
On behalf of the GraphFrames maintainers, I am happy to announce the delivery of a new release. It is a significant improvement! It improves performance and memory management:
PolyGraph Discrepancy: a classifier-based metric for graph generation
Existing methods for evaluating graph generative models primarily rely on Maximum Mean Discrepancy (MMD) metrics based on graph descriptors. While these metrics can rank generative models, they do not provide an absolute measure of performance.
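To make the criticized baseline concrete, here is a small self-contained sketch of a descriptor-based MMD metric (the descriptor choice, kernel, and bandwidth are my own illustrative assumptions, not PolyGraph's): degree histograms as graph descriptors, compared with a biased Gaussian-kernel MMD² estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_graph(n, p):
    """Erdos-Renyi-style toy graph as a dense adjacency matrix."""
    M = rng.random((n, n)) < p
    A = np.triu(M, 1)
    return (A + A.T).astype(float)

def degree_hist(A):
    """Toy graph descriptor: normalized degree histogram."""
    n = A.shape[0]
    h, _ = np.histogram(A.sum(axis=1), bins=np.arange(n + 1))
    return h / n

def mmd2(X, Y, sigma=1.0):
    """Biased (V-statistic) squared MMD with a Gaussian kernel."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

sparse = np.stack([degree_hist(random_graph(10, 0.2)) for _ in range(20)])
dense = np.stack([degree_hist(random_graph(10, 0.8)) for _ in range(20)])

# MMD^2 is exactly zero against itself and nonnegative across distinct sets,
# but its raw value has no absolute scale: the gap the paper targets.
print(mmd2(sparse, sparse), mmd2(sparse, dense))
```

The last line illustrates the abstract's point: the number for `mmd2(sparse, dense)` is positive, yet nothing about its magnitude says how good or bad a generative model is in absolute terms.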
Graphs are maximally expressive for higher-order interactions
We demonstrate that graph-based models are fully capable of representing higher-order interactions, and have a long history of being used for precisely this purpose. This stands in contrast to a common claim in the recent literature on
A Case for Hypergraphs to Model and Map SNNs on Neuromorphic Hardware
paper link : https://arxiv.org/abs/2601.16118
Keywords
* Spiking Neural Network (SNN)
* Neuromorphic Hardware
* Hypergraph
* Graph partitioning
* In contrast to LLM-based large-scale AI infrastructure, which requires massive data centers made up of thousands of servers, the human brain achieves astonishingly high
Position: Message-passing and spectral GNNs are two sides of the same coin
paper link : https://arxiv.org/abs/2602.10031
Keywords
* MPNN, Spectral GNN
* Graph Shift Operator
* Spectral PE
* Roadmap and Vision
* When putting together a suitable architecture for a new GNN project, you naturally find yourself weighing two standard approaches.
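The "two sides of the same coin" claim can be made concrete with a tiny numpy sketch (the graph, signal, and coefficients are illustrative assumptions, not from the paper): a polynomial filter applied in the eigenbasis of a graph shift operator S computes exactly the same output as repeated rounds of neighbor propagation:

```python
import numpy as np

rng = np.random.default_rng(1)
# Small undirected graph; S is the graph shift operator (here: the adjacency matrix).
S = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])
x = rng.standard_normal(4)           # a graph signal
theta = [0.5, -0.3, 0.1]             # polynomial filter coefficients

# Spectral view: the filter acts on the eigenvalues of S.
lam, U = np.linalg.eigh(S)
h_lam = sum(t * lam**k for k, t in enumerate(theta))
y_spectral = U @ (h_lam * (U.T @ x))

# Message-passing view: each power of S is one round of neighbor aggregation.
y_mp = np.zeros_like(x)
z = x.copy()
for t in theta:
    y_mp += t * z
    z = S @ z                        # one propagation step

print(np.allclose(y_spectral, y_mp))   # the two views coincide
```

Since S = U diag(λ) Uᵀ for a symmetric shift operator, any polynomial h(S) = Σₖ θₖ Sᵏ can be read either spectrally (reweighting eigenvalues) or spatially (K hops of message passing), which is the deliberation the commentary describes.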
RAG-Anything: All-in-One RAG Framework
Retrieval-Augmented Generation (RAG) has emerged as a fundamental paradigm for expanding Large Language Models beyond their static training limitations. However, a critical misalignment exists between current RAG capabilities and real-world information environments. Modern knowledge repositories are inherently multimodal, containing rich combinations of textual
Graph Talks: LLM and GNN integration (event registration link)
Hello, this is Jeong Yitae from GUG.
1. Although Ipchun, the first day of spring, is approaching, the wind is still biting. Thankfully it died down last weekend, giving us a brief chance to catch our breath over the holiday. I hope you are all staying healthy.
2. These days the AI industry is moving beyond GraphRAG (Softprompt, KGE), which was the talk of NVIDIA GTC 2025, toward the upcoming
Learning Topology and Physical Laws Beyond Nodes and Edges: E(n)-Equivariant Topological Neural Networks
Graph neural networks excel at modeling pairwise interactions, but they cannot flexibly accommodate higher-order interactions and features. Topological deep learning (TDL) has emerged recently as a promising tool for addressing
The Synergy Created by MCP and Knowledge Graphs in Agentic AI
Memgraph blog: Stay up-to-date with the latest trends and insights in graph database technology with Memgraph's blog.
GraphRAG & Knowledge Graphs: Making Your Data AI-Ready for 2026 (Fluree): Discover why 78% of companies aren't AI-ready and how GraphRAG
GUG Interview
Hello, this is Jeong Yitae, with some GUG Talks news. This session will take place online over lunchtime, together with Bryan Lee, a solutions engineer at Neo4j. Feel free to bring a light lunch and join with an easy mind.
Bryan discussion topic: Having recently worked through a range of research and trial and error on NER (Named Entity Recognition) tasks for building knowledge graphs, Bryan's
GraphOmakase
mHC-GNN: Manifold-Constrained Hyper-Connections for Graph Neural Networks
Graph Neural Networks (GNNs) suffer from over-smoothing in deep architectures and expressiveness bounded by the 1-Weisfeiler-Leman (1-WL) test. We adapt Manifold-Constrained Hyper-Connections (mHC), recently proposed for Transformers (Xie et al., 2025), to graph neural networks. Our method, mHC-GNN, expands
GraphOmakase
Beyond Context Graphs: Why 2026 Must Be the Year of Agentic Memory, Causality, and Explainability
https://medium.com/@volodymyrpavlyshyn/beyond-context-graphs-why-2026-must-be-the-year-of-agentic-memory-causality-and-explainability-db43632dbdee
* Hello, subscribers. The hopeful new year of 2026 has dawned. Happy New Year, and may the year ahead be filled with good things.
* While searching for and reading through various articles to decide what would make a good first omakase of 2026, "these days
GraphOmakase
Signals with shape: why topology matters for modern data?
News article: Signals with shape: why topology matters for modern data? (SURE-AI)
* At last I greet you with the final Graph Omakase of 2025, the Eulsa year. How has the year treated you? I send my regards to all of you, who will each have driven innovation and real progress in your own fields. I hear Korea is going through a severe cold wave right now, but everyone
Comparing RAG and GraphRAG for Page-Level Retrieval Question Answering on Math Textbook
Technology-enhanced learning environments often help students retrieve relevant learning content for questions arising during self-paced study. Large language models (LLMs) have emerged as novel aids for information
GraphOmakase
Learning to Retrieve and Reason on Knowledge Graph through Active Self-Reflection
Extensive research has investigated the integration of large language models (LLMs) with knowledge graphs to enhance the reasoning process. However, understanding how models perform reasoning utilizing structured graph knowledge
GraphOmakase
In-depth Analysis of Graph-based RAG in a Unified Framework
Graph-based Retrieval-Augmented Generation (RAG) has proven effective in integrating external knowledge into large language models (LLMs), improving their factual accuracy, adaptability, interpretability, and trustworthiness. A number of graph-based RAG methods have been proposed
GraphOmakase
Why Ontologies are Key for Data Governance in the LLM Era?
Reference Blog : https://medium.com/timbr-ai
* Recently, ontology has become a hot topic among the many companies weighing LLM adoption. As you all know, an ontology can be understood, simply put, as a map that defines the relationships and meanings between pieces of data. Rather than merely storing data,
GraphOmakase
NodeRAG: Structuring Graph-based RAG with Heterogeneous Nodes
Retrieval-augmented generation (RAG) empowers large language models to access external and private corpus, enabling factually consistent responses in specific domains. By exploiting the inherent structure of the corpus, graph-based RAG methods further enrich this process by building
GraphOmakase
When to use Graphs in RAG: A Comprehensive Analysis for Graph Retrieval-Augmented Generation
Graph retrieval-augmented generation (GraphRAG) has emerged as a powerful paradigm for enhancing large language models (LLMs) with external knowledge. It leverages graphs to model the
GraphOmakase
GraphFrames: Architectural Evolution from GraphX for Big Data and AI Applications
GraphFrames: an integrated API for mixing graph and relational queries (OpenReview)
Graph data is prevalent in many domains, but it has usually required specialized engines to analyze. This design is onerous for users and precludes optimization across complete workflows. We…
GraphOmakase
Verifying Chain-of-Thought Reasoning via Its Computational Graph
Current Chain-of-Thought (CoT) verification methods predict reasoning correctness based on outputs (black-box) or activations (gray-box), but offer limited insight into why a computation fails. We introduce a white-box method: Circuit-based Reasoning Verification (CRV). We hypothesize that attribution
GraphOmakase
Introducing GraphQA: An Agent for Asking Graphs Questions (Catio)
GraphQA is Catio's new open-source agent for natural-language questions over architecture graphs, fusing LLMs with graph algorithms to deliver fast, structure-aware answers for dependencies, flows, and system reasoning. By Iman Makaremi. GitHub: catio-tech/graphqa