
Demystifying Graph Neural Networks


Uncovering the power and applications of an emerging deep learning algorithm

Towards Data Science
Photo by Moritz Kindler on Unsplash

In this article, I aim to introduce you to an increasingly popular deep learning algorithm: Graph Neural Networks (GNNs). GNNs are gradually emerging from the realm of research and are already demonstrating impressive results on real-world problems, which suggests their vast potential. The foremost objective of this article is to demystify the algorithm. If, by the end, you can answer questions like "Why would I use a GNN?" and "How does a GNN work?", I will consider my mission accomplished.

Before delving into the topic, it's essential to recall two concepts intrinsically related to it:

Graphs and Embeddings

Graphs in Computer Science

Let's start with a quick reminder of what a graph is. Graphs are used in countless domains. In computer science in particular, a graph is a data structure composed of two elements: a set of vertices, or nodes, and a set of edges connecting these nodes.

A graph can be directed or undirected. A directed graph is a graph in which the edges have a direction.

So, a graph is a representation of relationships (edges) between objects (nodes).
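To make this concrete, here is a minimal sketch in plain Python (the node names and edges are arbitrary examples, not taken from any particular dataset):

```python
# A graph as a data structure: a set of nodes plus a set of edges.
nodes = {"A", "B", "C", "D"}

# Undirected graph: each edge is an unordered pair of nodes.
undirected_edges = {frozenset(("A", "B")), frozenset(("B", "C"))}

# Directed graph: each edge is an ordered pair (source, target),
# so ("A", "B") and ("B", "A") are different edges.
directed_edges = {("A", "B"), ("B", "C"), ("C", "A")}

# An adjacency list is a common concrete representation of the
# same information: each node maps to the nodes it points to.
adjacency = {"A": ["B"], "B": ["C"], "C": ["A"], "D": []}

for source, targets in adjacency.items():
    for target in targets:
        print(f"{source} -> {target}")
```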

Embeddings

Embeddings are a way to represent information. Let me explain with an example before discussing them more formally. Suppose I have a set of 10,000 objects to learn about. The "natural" representation of these objects is the discrete representation: a vector with as many components as there are elements in the set, where exactly one of the components is 1 and the rest are 0 (often called a one-hot vector).
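As an illustration, here is a minimal NumPy sketch of this one-hot representation (the set size of 10,000 comes from the example above; the index 42 is arbitrary):

```python
import numpy as np

NUM_OBJECTS = 10_000  # size of the set of objects from the example

def one_hot(index: int, size: int = NUM_OBJECTS) -> np.ndarray:
    """Discrete representation: a single 1, everything else 0."""
    vector = np.zeros(size)
    vector[index] = 1.0
    return vector

v = one_hot(42)
print(v.shape)  # (10000,) -- one dimension per object in the set
print(v.sum())  # 1.0 -- exactly one non-zero component
```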

This representation clearly poses a dimensionality problem. This is where embeddings come into play: they reduce the dimensionality of the problem by representing the data in a much lower-dimensional space. The representation is continuous, meaning the vector's components take values other than 0 and 1. However, unlike with the discrete representation, it is not straightforward to determine what each component represents in this new space.
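Here is a minimal sketch of the idea, with a randomly initialized table standing in for embeddings that would in practice be learned, and an illustrative dimension of 64 (both are assumptions, not values from the article):

```python
import numpy as np

NUM_OBJECTS = 10_000
EMBEDDING_DIM = 64  # illustrative hyperparameter, not prescribed here

# One dense, continuous vector per object. Real embeddings are learned
# during training; random initialization is just a stand-in.
rng = np.random.default_rng(seed=0)
embedding_table = rng.normal(size=(NUM_OBJECTS, EMBEDDING_DIM))

# Looking up an object's row replaces its 10,000-dimensional one-hot
# vector with a 64-dimensional continuous one.
v = embedding_table[42]
print(v.shape)  # (64,)
```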
