PyTorch Geometric Signed Directed Models

Directed (Unsigned) Network Models and Layers

class MagNet_node_classification(num_features: int, hidden: int = 2, q: float = 0.25, K: int = 2, label_dim: int = 2, activation: bool = False, trainable_q: bool = False, layer: int = 2, dropout: float = False, normalization: str = 'sym', cached: bool = False)[source]

The MagNet model for node classification from the MagNet: A Neural Network for Directed Graphs paper.

Parameters
  • num_features (int) – Size of each input sample.

  • hidden (int, optional) – Number of hidden channels. Default: 2.

  • K (int, optional) – Order of the Chebyshev polynomial. Default: 2.

  • q (float, optional) – Initial value of the phase parameter, 0 <= q <= 0.25. Default: 0.25.

  • label_dim (int, optional) – Number of output classes. Default: 2.

  • activation (bool, optional) – Whether to use an activation function. (default: False)

  • trainable_q (bool, optional) – Whether to set q to be trainable. (default: False)

  • layer (int, optional) – Number of MagNetConv layers. Default: 2.

  • dropout (float, optional) – Dropout value. (default: False)

  • normalization (str, optional) – The normalization scheme for the magnetic Laplacian (default: sym): 1. None: No normalization \(\mathbf{L} = \mathbf{D} - \mathbf{A} \odot \exp(i \Theta^{(q)})\) 2. "sym": Symmetric normalization \(\mathbf{L} = \mathbf{I} - \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2} \odot \exp(i \Theta^{(q)})\), where \(\odot\) denotes element-wise (Hadamard) multiplication.

  • cached (bool, optional) – If set to True, the layer will cache the __norm__ matrix on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)

forward(real: torch.FloatTensor, imag: torch.FloatTensor, edge_index: torch.LongTensor, edge_weight: Optional[torch.FloatTensor] = None) → torch.FloatTensor[source]

Making a forward pass of the MagNet node classification model.

Arg types:
  • real, imag (PyTorch Float Tensor) - Node features.

  • edge_index (PyTorch Long Tensor) - Edge indices.

  • edge_weight (PyTorch Float Tensor, optional) - Edge weights corresponding to edge indices.

Return types:
  • log_prob (PyTorch Float Tensor) - Logarithmic class probabilities for all nodes, with shape (num_nodes, num_classes).
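
Example: a minimal usage sketch (the import path, toy graph, and printed shape are illustrative assumptions, not part of the documented API):

    import torch
    from torch_geometric_signed_directed.nn.directed import MagNet_node_classification

    # Toy directed 4-cycle with 3-dimensional node features.
    x = torch.randn(4, 3)
    edge_index = torch.tensor([[0, 1, 2, 3],
                               [1, 2, 3, 0]])

    model = MagNet_node_classification(num_features=3, hidden=8, label_dim=2)
    # Real and imaginary input parts; reusing the feature matrix for both is a
    # simple choice when no imaginary features are available.
    log_prob = model(x, x, edge_index)
    print(log_prob.shape)  # expected: torch.Size([4, 2])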

class DiGCN_node_classification(num_features: int, hidden: int, label_dim: int, dropout: float = 0.5)[source]

An implementation of the DiGCN model without inception blocks for node classification from the Digraph Inception Convolutional Networks paper.

Parameters
  • num_features (int) – Dimension of input features.

  • hidden (int) – Hidden dimension.

  • label_dim (int) – Number of output classes.

  • dropout (float) – Dropout value. (Default: 0.5)

forward(x: torch.FloatTensor, edge_index: torch.LongTensor, edge_weight: Optional[torch.FloatTensor] = None) → torch.FloatTensor[source]

Making a forward pass of the DiGCN node classification model without inception blocks.

Arg types:
  • x (PyTorch FloatTensor) - Node features.

  • edge_index (PyTorch LongTensor) - Edge indices.

  • edge_weight (PyTorch FloatTensor, optional) - Edge weights corresponding to edge indices.

Return types:
  • x (PyTorch FloatTensor) - Logarithmic class probabilities for all nodes, with shape (num_nodes, num_classes).
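
Example: a minimal sketch. In practice edge_index and edge_weight should come from the paper's directed-adjacency preprocessing; the raw edges and import path below are assumptions used only to illustrate the call:

    import torch
    from torch_geometric_signed_directed.nn.directed import DiGCN_node_classification

    x = torch.randn(5, 4)
    # Placeholder for the preprocessed approximate directed adjacency.
    edge_index = torch.tensor([[0, 1, 2, 3, 4],
                               [1, 2, 3, 4, 0]])
    edge_weight = torch.ones(edge_index.size(1))

    model = DiGCN_node_classification(num_features=4, hidden=8, label_dim=3)
    log_prob = model(x, edge_index, edge_weight)
    print(log_prob.shape)  # expected: torch.Size([5, 3])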

class DiGCN_Inception_Block_node_classification(num_features: int, hidden: int, label_dim: int, dropout: float = 0.5)[source]

An implementation of the DiGCN model with inception blocks for node classification from the Digraph Inception Convolutional Networks paper.

Parameters
  • num_features (int) – Dimension of input features.

  • hidden (int) – Hidden dimension.

  • label_dim (int) – Number of output classes.

  • dropout (float) – Dropout value.

forward(features: torch.FloatTensor, edge_index_tuple: Tuple[torch.LongTensor, torch.LongTensor], edge_weight_tuple: Tuple[torch.FloatTensor, torch.FloatTensor]) → torch.FloatTensor[source]

Making a forward pass of the DiGCN node classification model.

Arg types:
  • features (PyTorch FloatTensor) - Node features.

  • edge_index_tuple (PyTorch LongTensor) - Tuple of edge indices.

  • edge_weight_tuple (PyTorch FloatTensor, optional) - Tuple of edge weights corresponding to edge indices.

Return types:
  • x (PyTorch FloatTensor) - Logarithmic class probabilities for all nodes, with shape (num_nodes, num_classes).
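
Example: a minimal sketch of the tuple-based inputs. The two edge sets would normally be the first- and second-order proximity matrices from the paper's preprocessing; the identical toy edges and import path here are assumptions:

    import torch
    from torch_geometric_signed_directed.nn.directed import (
        DiGCN_Inception_Block_node_classification,
    )

    x = torch.randn(5, 4)
    ei = torch.tensor([[0, 1, 2, 3, 4],
                       [1, 2, 3, 4, 0]])
    ew = torch.ones(ei.size(1))

    model = DiGCN_Inception_Block_node_classification(num_features=4, hidden=8, label_dim=3)
    # One tuple entry per proximity order.
    log_prob = model(x, (ei, ei), (ew, ew))
    print(log_prob.shape)  # expected: torch.Size([5, 3])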

class DIGRAC_node_clustering(num_features: int, hidden: int, nclass: int, fill_value: float, dropout: float, hop: int)[source]

The directed graph clustering model from the DIGRAC: Digraph Clustering Based on Flow Imbalance paper.

Parameters
  • num_features (int) – Number of features.

  • hidden (int) – Hidden dimensions of the initial MLP.

  • nclass (int) – Number of clusters.

  • dropout (float) – Dropout probability.

  • hop (int) – Number of hops to consider.

  • fill_value (float) – Value for added self-loops.

forward(edge_index: torch.LongTensor, edge_weight: torch.FloatTensor, features: torch.FloatTensor) → Tuple[torch.FloatTensor, torch.FloatTensor, torch.LongTensor, torch.FloatTensor][source]

Making a forward pass of the DIGRAC node clustering model.

Arg types:
  • edge_index (PyTorch LongTensor) - Edge indices.

  • edge_weight (PyTorch FloatTensor) - Edge weights.

  • features (PyTorch FloatTensor) - Input node features, with shape (num_nodes, num_features).

Return types:
  • z (PyTorch FloatTensor) - Embedding matrix, with shape (num_nodes, 2*hidden).

  • output (PyTorch FloatTensor) - Log of the cluster probability matrix, with shape (num_nodes, num_clusters).

  • predictions_cluster (PyTorch LongTensor) - Predicted labels.

  • prob (PyTorch FloatTensor) - Probability assignment matrix of different clusters, with shape (num_nodes, num_clusters).
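
Example: a minimal sketch showing how the four return values are unpacked (toy graph and import path are assumptions):

    import torch
    from torch_geometric_signed_directed.nn.directed import DIGRAC_node_clustering

    x = torch.randn(6, 4)
    edge_index = torch.tensor([[0, 1, 2, 3, 4, 5],
                               [1, 2, 3, 4, 5, 0]])
    edge_weight = torch.ones(edge_index.size(1))

    model = DIGRAC_node_clustering(num_features=4, hidden=8, nclass=3,
                                   fill_value=0.5, dropout=0.5, hop=2)
    z, log_prob, pred, prob = model(edge_index, edge_weight, x)
    print(z.shape, prob.shape)  # expected: torch.Size([6, 16]) torch.Size([6, 3])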

class DGCN_node_classification(num_features: int, hidden: int, label_dim: int, dropout: Optional[float] = 0.5, improved: bool = False, cached: bool = False)[source]

An implementation of the DGCN node classification model from the Directed Graph Convolutional Network paper.

Parameters
  • num_features (int) – Dimension of input features.

  • hidden (int) – Hidden dimension.

  • label_dim (int) – Output dimension.

  • dropout (float, optional) – Dropout value. Default: 0.5.

  • improved (bool, optional) – If set to True, the layer computes \(\mathbf{\hat{A}}\) as \(\mathbf{A} + 2\mathbf{I}\). (default: False)

  • cached (bool, optional) – If set to True, the layer will cache the computation of \(\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2}\) on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)

forward(x: torch.FloatTensor, edge_index: torch.LongTensor, edge_in: torch.LongTensor, edge_out: torch.LongTensor, in_w: Optional[torch.FloatTensor] = None, out_w: Optional[torch.FloatTensor] = None) → torch.FloatTensor[source]

Making a forward pass of the DGCN node classification model.

Arg types:
  • x (PyTorch FloatTensor) - Node features.

  • edge_index (PyTorch LongTensor) - Edge indices.

  • edge_in, edge_out (PyTorch LongTensor) - Edge indices for input and output directions, respectively.

  • in_w, out_w (PyTorch FloatTensor, optional) - Edge weights corresponding to edge indices.

Return types:
  • x (PyTorch FloatTensor) - Logarithmic class probabilities for all nodes, with shape (num_nodes, num_classes).
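
Example: a minimal sketch. edge_in/edge_out should be the in- and out-neighbourhood edge sets produced by the paper's preprocessing; reusing the raw edges below is a placeholder so the call runs end to end:

    import torch
    from torch_geometric_signed_directed.nn.directed import DGCN_node_classification

    x = torch.randn(4, 3)
    edge_index = torch.tensor([[0, 1, 2, 3],
                               [1, 2, 3, 0]])
    edge_in, edge_out = edge_index, edge_index  # placeholders, see note above
    in_w = out_w = torch.ones(edge_index.size(1))

    model = DGCN_node_classification(num_features=3, hidden=8, label_dim=2)
    log_prob = model(x, edge_index, edge_in, edge_out, in_w, out_w)
    print(log_prob.shape)  # expected: torch.Size([4, 2])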

class DiGCL(in_channels: int, activation: str, num_hidden: int, num_proj_hidden: int, tau: float, num_layers: int)[source]

An implementation of the DiGCL model from the Directed Graph Contrastive Learning paper.

Parameters
  • in_channels (int) – Dimension of input features.

  • activation (str) – Activation function to use.

  • num_hidden (int) – Hidden dimension.

  • num_proj_hidden (int) – Hidden dimension for projection.

  • tau (float) – Tau value in the loss.

  • num_layers (int) – Number of layers in the encoder.

batched_semi_loss(z1: torch.Tensor, z2: torch.Tensor, batch_size: int)[source]

Batched version of the semi-supervised loss function. Space complexity: O(BN), compared with O(N^2) for semi_loss().

Arg types:
  • z1 (PyTorch FloatTensor) - Node features.

  • z2 (PyTorch FloatTensor) - Node features.

Return types:
  • loss (PyTorch FloatTensor) - Loss.

forward(x: torch.Tensor, edge_index: torch.Tensor, edge_weight: Optional[torch.Tensor] = None) → torch.Tensor[source]

Making a forward pass of the DiGCL model.

Arg types:
  • x (PyTorch FloatTensor) - Node features.

  • edge_index (PyTorch LongTensor) - Edge indices.

  • edge_weight (PyTorch FloatTensor, optional) - Edge weights corresponding to edge indices.

Return types:
  • x (PyTorch FloatTensor) - Embeddings for all nodes, with shape (num_nodes, out_channels).

loss(z1: torch.Tensor, z2: torch.Tensor, mean: bool = True, batch_size: int = 0)[source]

The DiGCL contrastive loss.

Arg types:
  • z1, z2 (PyTorch FloatTensor) - Node hidden representations.

  • mean (bool, optional) - Whether to return the mean of the loss values (default: True); otherwise the sum is returned.

  • batch_size (int, optional) - Batch size; 0 means full-batch computation. Default: 0.

Return types:
  • ret (PyTorch FloatTensor) - Loss.

projection(z: torch.Tensor) → torch.Tensor[source]

Nonlinear transformation of the input hidden feature.

Arg types:
  • z (PyTorch FloatTensor) - Node features.

Return types:
  • z (PyTorch FloatTensor) - Projected node features.

semi_loss(z1: torch.Tensor, z2: torch.Tensor)[source]

Semi-supervised loss function.

Arg types:
  • z1 (PyTorch FloatTensor) - Node features.

  • z2 (PyTorch FloatTensor) - Node features.

Return types:
  • loss (PyTorch FloatTensor) - Loss.

sim(z1: torch.Tensor, z2: torch.Tensor)[source]

Normalized similarity calculation.

Arg types:
  • z1 (PyTorch FloatTensor) - Node features.

  • z2 (PyTorch FloatTensor) - Node features.

Return types:
  • z (PyTorch FloatTensor) - Node-wise similarity.
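
Example: a minimal contrastive-training sketch. In the paper the two views come from Laplacian perturbation; here a second toy edge set stands in for the augmented view, and 'relu' is assumed to be an accepted activation string:

    import torch
    from torch_geometric_signed_directed.nn.directed import DiGCL

    x = torch.randn(5, 4)
    ei1 = torch.tensor([[0, 1, 2, 3, 4], [1, 2, 3, 4, 0]])
    ei2 = torch.tensor([[0, 1, 2, 3, 4], [2, 3, 4, 0, 1]])  # "augmented" view

    model = DiGCL(in_channels=4, activation='relu', num_hidden=8,
                  num_proj_hidden=8, tau=0.4, num_layers=2)
    z1 = model(x, ei1)
    z2 = model(x, ei2)
    loss = model.loss(z1, z2, mean=True)  # full-batch contrastive loss
    loss.backward()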

class MagNet_link_prediction(num_features: int, hidden: int = 2, q: float = 0.25, K: int = 2, label_dim: int = 2, activation: bool = True, trainable_q: bool = False, layer: int = 2, dropout: float = 0.5, normalization: str = 'sym', cached: bool = False)[source]

The MagNet model for link prediction from the MagNet: A Neural Network for Directed Graphs paper.

Parameters
  • num_features (int) – Size of each input sample.

  • hidden (int, optional) – Number of hidden channels. Default: 2.

  • K (int, optional) – Order of the Chebyshev polynomial. Default: 2.

  • q (float, optional) – Initial value of the phase parameter, 0 <= q <= 0.25. Default: 0.25.

  • label_dim (int, optional) – Number of output classes. Default: 2.

  • activation (bool, optional) – Whether to use an activation function. (default: True)

  • trainable_q (bool, optional) – Whether to set q to be trainable. (default: False)

  • layer (int, optional) – Number of MagNetConv layers. Default: 2.

  • dropout (float, optional) – Dropout value. (default: 0.5)

  • normalization (str, optional) – The normalization scheme for the magnetic Laplacian (default: sym): 1. None: No normalization \(\mathbf{L} = \mathbf{D} - \mathbf{A} \odot \exp(i \Theta^{(q)})\) 2. "sym": Symmetric normalization \(\mathbf{L} = \mathbf{I} - \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2} \odot \exp(i \Theta^{(q)})\), where \(\odot\) denotes element-wise (Hadamard) multiplication.

  • cached (bool, optional) – If set to True, the layer will cache the __norm__ matrix on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)

forward(real: torch.FloatTensor, imag: torch.FloatTensor, edge_index: torch.LongTensor, query_edges: torch.LongTensor, edge_weight: Optional[torch.FloatTensor] = None) → torch.FloatTensor[source]

Making a forward pass of the MagNet link prediction model.

Arg types:
  • real, imag (PyTorch Float Tensor) - Node features.

  • edge_index (PyTorch Long Tensor) - Edge indices.

  • query_edges (PyTorch Long Tensor) - Edge indices for querying labels.

  • edge_weight (PyTorch Float Tensor, optional) - Edge weights corresponding to edge indices.

Return types:
  • log_prob (PyTorch Float Tensor) - Logarithmic class probabilities for all query edges, with shape (num_query_edges, num_classes).

class DiGCN_link_prediction(num_features: int, hidden: int, label_dim: int, dropout: float = 0.5)[source]

An implementation of the DiGCN model without inception blocks for link prediction from the Digraph Inception Convolutional Networks paper.

Parameters
  • num_features (int) – Dimension of input features.

  • hidden (int) – Hidden dimension.

  • label_dim (int) – Number of output classes.

  • dropout (float) – Dropout value. (Default: 0.5)

forward(x: torch.FloatTensor, edge_index: torch.LongTensor, query_edges: torch.LongTensor, edge_weight: Optional[torch.FloatTensor] = None) → torch.FloatTensor[source]

Making a forward pass of the DiGCN link prediction model without inception blocks.

Arg types:
  • x (PyTorch FloatTensor) - Node features.

  • edge_index (PyTorch LongTensor) - Edge indices.

  • query_edges (PyTorch LongTensor) - Edge indices for querying labels.

  • edge_weight (PyTorch FloatTensor, optional) - Edge weights corresponding to edge indices.

Return types:
  • x (PyTorch FloatTensor) - Logarithmic class probabilities for all query edges, with shape (num_query_edges, num_classes).

class DiGCN_Inception_Block_link_prediction(num_features: int, hidden: int, num_clusters: int, dropout: float = 0.5)[source]

An implementation of the DiGCN model with inception blocks for link prediction from the Digraph Inception Convolutional Networks paper.

Parameters
  • num_features (int) – Dimension of input features.

  • hidden (int) – Hidden dimension.

  • num_clusters (int) – Number of output classes.

  • dropout (float) – Dropout value.

forward(x: torch.FloatTensor, edge_index_tuple: Tuple[torch.LongTensor, torch.LongTensor], query_edges: torch.LongTensor, edge_weight_tuple: Tuple[torch.FloatTensor, torch.FloatTensor]) → torch.FloatTensor[source]

Making a forward pass of the DiGCN link prediction model with inception blocks.

Arg types:
  • x (PyTorch FloatTensor) - Node features.

  • edge_index_tuple (PyTorch LongTensor) - Tuple of edge indices.

  • query_edges (PyTorch Long Tensor) - Edge indices for querying labels.

  • edge_weight_tuple (PyTorch FloatTensor, optional) - Tuple of edge weights corresponding to edge indices.

Return types:
  • x (PyTorch FloatTensor) - Logarithmic class probabilities for all query edges, with shape (num_query_edges, num_classes).

class DGCN_link_prediction(input_dim: int, filter_num: int, label_dim: int, dropout: Optional[float] = 0.5, improved: bool = False, cached: bool = False)[source]

An implementation of the DGCN link prediction model from the Directed Graph Convolutional Network paper.

Parameters
  • input_dim (int) – Dimension of input features.

  • filter_num (int) – Hidden dimension.

  • label_dim (int) – Output dimension.

  • dropout (float, optional) – Dropout value. Default: 0.5.

  • improved (bool, optional) – If set to True, the layer computes \(\mathbf{\hat{A}}\) as \(\mathbf{A} + 2\mathbf{I}\). (default: False)

  • cached (bool, optional) – If set to True, the layer will cache the computation of \(\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2}\) on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)

Making a forward pass of the DGCN link prediction model.

Arg types:
  • x (PyTorch FloatTensor) - Node features.

  • edge_index (PyTorch LongTensor) - Edge indices.

  • query_edges (PyTorch LongTensor) - Edge indices for querying labels.

  • edge_in, edge_out (PyTorch LongTensor) - Edge indices for input and output directions, respectively.

  • in_w, out_w (PyTorch FloatTensor, optional) - Edge weights corresponding to edge indices.

Return types:
  • x (PyTorch FloatTensor) - Logarithmic class probabilities for all query edges, with shape (num_query_edges, num_classes).

class MagNetConv(in_channels: int, out_channels: int, K: int, q: float, trainable_q: bool, normalization: str = 'sym', cached: bool = False, bias: bool = True, **kwargs)[source]

The magnetic graph convolutional operator from the MagNet: A Neural Network for Directed Graphs paper. Here \(\mathbf{\hat{L}}\) denotes the scaled and normalized magnetic Laplacian \(\frac{2\mathbf{L}}{\lambda_{\max}} - \mathbf{I}\).

Parameters
  • in_channels (int) – Size of each input sample.

  • out_channels (int) – Size of each output sample.

  • K (int) – Chebyshev filter size \(K\).

  • q (float, optional) – Initial value of the phase parameter, 0 <= q <= 0.25. Default: 0.25.

  • trainable_q (bool, optional) – Whether to set q to be trainable. (default: False)

  • normalization (str, optional) – The normalization scheme for the magnetic Laplacian (default: sym): 1. None: No normalization \(\mathbf{L} = \mathbf{D} - \mathbf{A} \odot \exp(i \Theta^{(q)})\) 2. "sym": Symmetric normalization \(\mathbf{L} = \mathbf{I} - \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2} \odot \exp(i \Theta^{(q)})\), where \(\odot\) denotes element-wise (Hadamard) multiplication.

  • cached (bool, optional) – If set to True, the layer will cache the __norm__ matrix on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)

  • bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)

  • **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.

forward(x_real: torch.FloatTensor, x_imag: torch.FloatTensor, edge_index: torch.LongTensor, edge_weight: Optional[torch.Tensor] = None, lambda_max: Optional[torch.Tensor] = None) → torch.FloatTensor[source]

Making a forward pass of the MagNet Convolution layer.

Arg types:
  • x_real, x_imag (PyTorch Float Tensor) - Node features.

  • edge_index (PyTorch Long Tensor) - Edge indices.

  • edge_weight (PyTorch Float Tensor, optional) - Edge weights corresponding to edge indices.

  • lambda_max (optional, but mandatory if normalization is None) - Largest eigenvalue of Laplacian.

Return types:
  • out_real, out_imag (PyTorch Float Tensor) - Hidden state tensor for all nodes, with shape (N_nodes, F_out).
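
Example: a minimal sketch of the two-part (real/imaginary) input and output (import path and shapes are assumptions):

    import torch
    from torch_geometric_signed_directed.nn.directed import MagNetConv

    x_real = torch.randn(4, 3)
    x_imag = torch.randn(4, 3)
    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])

    conv = MagNetConv(in_channels=3, out_channels=8, K=2,
                      q=0.25, trainable_q=False)
    out_real, out_imag = conv(x_real, x_imag, edge_index)
    print(out_real.shape, out_imag.shape)  # expected: torch.Size([4, 8]) twice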

message(x_j, norm)[source]

Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index. This function can take any argument as input which was initially passed to propagate(). Furthermore, tensors passed to propagate() can be mapped to the respective nodes \(i\) and \(j\) by appending _i or _j to the variable name, e.g. x_i and x_j.

class DiGCNConv(in_channels: int, out_channels: int, improved: bool = False, cached: bool = True, bias: bool = True, **kwargs)[source]

The graph convolutional operator from the Digraph Inception Convolutional Networks paper. The spectral operation is the same as in Kipf’s GCN. DiGCN preprocesses the adjacency matrix, so no normalization operation is required during convolution.

Parameters
  • in_channels (int) – Size of each input sample.

  • out_channels (int) – Size of each output sample.

  • cached (bool, optional) – If set to True, the layer will cache the adj matrix on first execution, and will use the cached version for further executions. Note that all normalized adjacency matrices (including undirected ones) are computed during dataset preprocessing to reduce computation time. This parameter should only be set to True in transductive learning scenarios. (default: True)

  • bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)

  • **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.

forward(x: torch.FloatTensor, edge_index: torch.LongTensor, edge_weight: Optional[torch.FloatTensor] = None) → torch.FloatTensor[source]

Making a forward pass of the DiGCN Convolution layer.

Arg types:
  • x (PyTorch FloatTensor) - Node features.

  • edge_index (PyTorch LongTensor) - Edge indices.

  • edge_weight (PyTorch FloatTensor, optional) - Edge weights corresponding to edge indices.

Return types:
  • x (PyTorch FloatTensor) - Hidden state tensor for all nodes.

message(x_j, norm)[source]

Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index. This function can take any argument as input which was initially passed to propagate(). Furthermore, tensors passed to propagate() can be mapped to the respective nodes \(i\) and \(j\) by appending _i or _j to the variable name, e.g. x_i and x_j.

update(aggr_out)[source]

Updates node embeddings in analogy to \(\gamma_{\mathbf{\Theta}}\) for each node \(i \in \mathcal{V}\). Takes in the output of aggregation as first argument and any argument which was initially passed to propagate().

class DiGCN_InceptionBlock(in_dim: int, out_dim: int)[source]

An implementation of the inception block model from the Digraph Inception Convolutional Networks paper.

Parameters
  • in_dim (int) – Dimension of input.

  • out_dim (int) – Dimension of output.

forward(x: torch.FloatTensor, edge_index: torch.LongTensor, edge_weight: torch.FloatTensor, edge_index2: torch.LongTensor, edge_weight2: torch.FloatTensor) → Tuple[torch.FloatTensor, torch.FloatTensor, torch.FloatTensor][source]

Making a forward pass of the DiGCN inception block model.

Arg types:
  • x (PyTorch FloatTensor) - Node features.

  • edge_index, edge_index2 (PyTorch LongTensor) - Edge indices.

  • edge_weight, edge_weight2 (PyTorch FloatTensor) - Edge weights corresponding to edge indices.

Return types:
  • x0, x1, x2 (PyTorch FloatTensor) - Hidden representations.

class DIMPA(hop: int, fill_value: float = 0.5)[source]

The directed mixed-path aggregation model from the DIGRAC: Digraph Clustering Based on Flow Imbalance paper.

Parameters
  • hop (int) – Number of hops to consider.

  • fill_value (float, optional) – The layer computes \(\mathbf{\hat{A}}\) as \(\mathbf{A} + \textrm{fill\_value} \cdot \mathbf{I}\). (default: 0.5)

forward(x_s: torch.FloatTensor, x_t: torch.FloatTensor, edge_index: torch.LongTensor, edge_weight: torch.FloatTensor) → torch.FloatTensor[source]

Making a forward pass of DIMPA.

Arg types:
  • x_s (PyTorch FloatTensor) - Source hidden representations.

  • x_t (PyTorch FloatTensor) - Target hidden representations.

  • edge_index (PyTorch LongTensor) - Edge indices.

  • edge_weight (PyTorch FloatTensor) - Edge weights.

Return types:
  • feat (PyTorch FloatTensor) - Embedding matrix, with shape (num_nodes, 2*input_dim).
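
Example: a minimal sketch. In DIGRAC, x_s and x_t come from the initial MLPs, so the random tensors below are placeholders:

    import torch
    from torch_geometric_signed_directed.nn.directed import DIMPA

    x_s = torch.randn(5, 8)  # source-side hidden representations
    x_t = torch.randn(5, 8)  # target-side hidden representations
    edge_index = torch.tensor([[0, 1, 2, 3, 4], [1, 2, 3, 4, 0]])
    edge_weight = torch.ones(edge_index.size(1))

    layer = DIMPA(hop=2)
    feat = layer(x_s, x_t, edge_index, edge_weight)
    print(feat.shape)  # expected: torch.Size([5, 16]), source/target concatenated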

class DGCNConv(improved: bool = False, cached: bool = False, add_self_loops: bool = True, normalize: bool = True, **kwargs)[source]

An implementation of the graph convolutional operator from the Directed Graph Convolutional Network paper. It is the same as Kipf’s GCN but without trainable weights.

Parameters
  • improved (bool, optional) – If set to True, the layer computes \(\mathbf{\hat{A}}\) as \(\mathbf{A} + 2\mathbf{I}\). (default: False)

  • cached (bool, optional) – If set to True, the layer will cache the computation of \(\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2}\) on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)

  • add_self_loops (bool, optional) – If set to False, will not add self-loops to the input graph. (default: True)

  • normalize (bool, optional) – Whether to add self-loops and compute symmetric normalization coefficients on the fly. (default: True)

  • **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.

forward(x: torch.Tensor, edge_index: Union[torch.Tensor, torch_sparse.tensor.SparseTensor], edge_weight: Optional[torch.Tensor] = None) → torch.Tensor[source]

Making a forward pass of the graph convolutional operator.

Arg types:
  • x (PyTorch FloatTensor) - Node features.

  • edge_index (Adj) - Edge indices.

  • edge_weight (OptTensor, optional) - Edge weights corresponding to edge indices.

Return types:
  • out (PyTorch FloatTensor) - Hidden state tensor for all nodes.

message(x_j: torch.Tensor, edge_weight: Optional[torch.Tensor]) → torch.Tensor[source]

Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index. This function can take any argument as input which was initially passed to propagate(). Furthermore, tensors passed to propagate() can be mapped to the respective nodes \(i\) and \(j\) by appending _i or _j to the variable name, e.g. x_i and x_j.

message_and_aggregate(adj_t: torch_sparse.tensor.SparseTensor, x: torch.Tensor) → torch.Tensor[source]

Fuses computations of message() and aggregate() into a single function. If applicable, this saves both time and memory since messages do not explicitly need to be materialized. This function will only get called in case it is implemented and propagation takes place based on a torch_sparse.SparseTensor.

Signed (Directed) Network Models and Layers

class SSSNET_node_clustering(nfeat: int, hidden: int, nclass: int, dropout: float, hop: int, fill_value: float, directed: bool = False, bias: bool = True)[source]

The signed graph clustering model from the SSSNET: Semi-Supervised Signed Network Clustering paper.

Parameters
  • nfeat (int) – Number of features.

  • hidden (int) – Hidden dimensions of the initial MLP.

  • nclass (int) – Number of clusters.

  • dropout (float) – Dropout probability.

  • hop (int) – Number of hops to consider.

  • fill_value (float) – Value for added self-loops for the positive part of the adjacency matrix.

  • directed (bool, optional) – Whether the input network is directed or not. (default: False)

  • bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)

forward(edge_index_p: torch.LongTensor, edge_weight_p: torch.FloatTensor, edge_index_n: torch.LongTensor, edge_weight_n: torch.FloatTensor, features: torch.FloatTensor) → Tuple[torch.FloatTensor, torch.FloatTensor, torch.LongTensor, torch.FloatTensor][source]

Making a forward pass of the SSSNET.

Arg types:
  • edge_index_p, edge_index_n (PyTorch LongTensor) - Edge indices for positive and negative parts.

  • edge_weight_p, edge_weight_n (PyTorch FloatTensor) - Edge weights for positive and negative parts.

  • features (PyTorch FloatTensor) - Input node features, with shape (num_nodes, num_features).

Return types:
  • z (PyTorch FloatTensor) - Embedding matrix, with shape (num_nodes, 2*hidden) for undirected graphs and (num_nodes, 4*hidden) for directed graphs.

  • output (PyTorch FloatTensor) - Log of the cluster probability matrix, with shape (num_nodes, num_clusters).

  • predictions_cluster (PyTorch LongTensor) - Predicted labels.

  • prob (PyTorch FloatTensor) - Probability assignment matrix of different clusters, with shape (num_nodes, num_clusters).
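
Example: a minimal sketch on a toy signed graph (import path and tensors are illustrative assumptions):

    import torch
    from torch_geometric_signed_directed.nn.signed import SSSNET_node_clustering

    x = torch.randn(6, 4)
    # Positive and negative parts of the signed adjacency.
    ei_p = torch.tensor([[0, 1, 2], [1, 2, 0]])
    ew_p = torch.ones(3)
    ei_n = torch.tensor([[3, 4, 5], [4, 5, 3]])
    ew_n = torch.ones(3)

    model = SSSNET_node_clustering(nfeat=4, hidden=8, nclass=2, dropout=0.5,
                                   hop=2, fill_value=0.5)
    z, log_prob, pred, prob = model(ei_p, ew_p, ei_n, ew_n, x)
    print(z.shape, prob.shape)  # expected: torch.Size([6, 16]) torch.Size([6, 2])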

class SSSNET_link_prediction(nfeat: int, hidden: int, nclass: int, dropout: float, hop: int, fill_value: float, directed: bool = False, bias: bool = True)[source]

The signed graph link prediction model adapted from the SSSNET: Semi-Supervised Signed Network Clustering paper.

Parameters
  • nfeat (int) – Number of features.

  • hidden (int) – Hidden dimensions of the initial MLP.

  • nclass (int) – Number of link classes.

  • dropout (float) – Dropout probability.

  • hop (int) – Number of hops to consider.

  • fill_value (float) – Value for added self-loops for the positive part of the adjacency matrix.

  • directed (bool, optional) – Whether the input network is directed or not. (default: False)

  • bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)

forward(edge_index_p: torch.LongTensor, edge_weight_p: torch.FloatTensor, edge_index_n: torch.LongTensor, edge_weight_n: torch.FloatTensor, features: torch.FloatTensor, query_edges: torch.LongTensor) → torch.FloatTensor[source]

Making a forward pass of the SSSNET link prediction model.

Arg types:
  • edge_index_p, edge_index_n (PyTorch LongTensor) - Edge indices for positive and negative parts.

  • edge_weight_p, edge_weight_n (PyTorch FloatTensor) - Edge weights for positive and negative parts.

  • features (PyTorch FloatTensor) - Input node features, with shape (num_nodes, num_features).

  • query_edges (PyTorch Long Tensor) - Edge indices for querying labels.

Return types:
  • log_prob (PyTorch Float Tensor) - Logarithmic class probabilities for all query edges, with shape (num_query_edges, num_classes).

class SIMPA(hop: int, fill_value: float, directed: bool = False)[source]

The signed mixed-path aggregation model from the SSSNET: Semi-Supervised Signed Network Clustering paper.

Parameters
  • hop (int) – Number of hops to consider.

  • fill_value (float) – Value for added self-loops for the positive part of the adjacency matrix.

  • directed (bool, optional) – Whether the input network is directed or not. (default: False)

forward(edge_index_p: torch.LongTensor, edge_weight_p: torch.FloatTensor, edge_index_n: torch.LongTensor, edge_weight_n: torch.FloatTensor, x_p: torch.FloatTensor, x_n: torch.FloatTensor, x_pt: Optional[torch.FloatTensor] = None, x_nt: Optional[torch.FloatTensor] = None) → torch.FloatTensor[source]

Making a forward pass of SIMPA.

Arg types:
  • edge_index_p, edge_index_n (PyTorch LongTensor) - Edge indices for positive and negative parts.

  • edge_weight_p, edge_weight_n (PyTorch FloatTensor) - Edge weights for positive and negative parts.

  • x_p (PyTorch FloatTensor) - Source positive hidden representations.

  • x_n (PyTorch FloatTensor) - Source negative hidden representations.

  • x_pt (PyTorch FloatTensor, optional) - Target positive hidden representations. Default: None.

  • x_nt (PyTorch FloatTensor, optional) - Target negative hidden representations. Default: None.

Return types:
  • feat (PyTorch FloatTensor) - Embedding matrix, with shape (num_nodes, 2*input_dim) for undirected graphs and (num_nodes, 4*input_dim) for directed graphs.

class SDGNN(node_num: int, edge_index_s, in_dim: int = 20, out_dim: int = 20, layer_num: int = 2, init_emb: Optional[torch.FloatTensor] = None, init_emb_grad: bool = True, lamb_d: float = 5.0, lamb_t: float = 1.0, **kwargs)[source]

The SDGNN model from the “SDGNN: Learning Node Representation for Signed Directed Networks” paper.

Parameters
  • node_num (int) – The number of nodes.

  • edge_index_s (LongTensor) – The edgelist with sign. (e.g., torch.LongTensor([[0, 1, -1], [0, 2, 1]]) )

  • in_dim (int, optional) – Size of each input sample features. Defaults to 20.

  • out_dim (int) – Size of each hidden embeddings. Defaults to 20.

  • layer_num (int, optional) – Number of layers. Defaults to 2.

  • init_emb (FloatTensor, optional) – The initial embeddings. Defaults to None, in which case TSVD is used to generate the initial embeddings.

  • init_emb_grad (bool, optional) – Whether to set the initial embeddings to be trainable. (default: True)

  • lamb_d (float, optional) – Balances the direction loss contributions of the overall objective. (default: 5.0)

  • lamb_t (float, optional) – Balances the triangle loss contributions of the overall objective. (default: 1.0)

forward() → torch.FloatTensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
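
Example: a minimal sketch. init_emb is passed explicitly here because the default TSVD initialization assumes a graph larger than this toy one; the import path is likewise an assumption. The same forward()-without-arguments pattern applies to SiGAT, SGCN, and SNEA below:

    import torch
    from torch_geometric_signed_directed.nn.signed import SDGNN

    # Signed edge list: each row is (source, target, sign).
    edge_index_s = torch.LongTensor([[0, 1, 1],
                                     [1, 2, -1],
                                     [2, 0, 1]])

    init_emb = torch.randn(3, 8)  # explicit (node_num, in_dim) embeddings
    model = SDGNN(node_num=3, edge_index_s=edge_index_s,
                  in_dim=8, out_dim=8, init_emb=init_emb)
    emb = model()  # forward() takes no arguments
    print(emb.shape)  # expected: torch.Size([3, 8])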

class SiGAT(node_num: int, edge_index_s, in_dim: int = 20, out_dim: int = 20, init_emb: Optional[torch.FloatTensor] = None, init_emb_grad: bool = True, **kwargs)[source]

The signed graph attention network model (SiGAT) from the “Signed Graph Attention Networks” paper.

Parameters
  • node_num (int) – Number of nodes.

  • edge_index_s (list) – The edgelist with sign. (e.g., [[0, 1, -1]] )

  • in_dim (int, optional) – Size of each input sample features. Defaults to 20.

  • out_dim (int) – Size of each output embeddings. Defaults to 20.

  • init_emb (FloatTensor, optional) – The initial embeddings. Defaults to None, in which case TSVD is used to generate the initial embeddings.

  • init_emb_grad (bool, optional) – Whether to set the initial embeddings to be trainable. (default: True)

forward() → torch.FloatTensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class SGCN(node_num: int, edge_index_s: torch.LongTensor, in_dim: int = 64, out_dim: int = 64, layer_num: int = 2, init_emb: Optional[torch.FloatTensor] = None, init_emb_grad: bool = False, lamb: float = 5, norm_emb: bool = False, **kwargs)[source]

The signed graph convolutional network model from the “Signed Graph Convolutional Network” paper. Internally, the first part of this module uses the torch_geometric.nn.conv.SignedConv operator. We have made some modifications to the original model torch_geometric.nn.SignedGCN for the uniformity of model inputs.

Parameters
  • node_num (int) – The number of nodes.

  • edge_index_s (LongTensor) – The edgelist with sign. (e.g., torch.LongTensor([[0, 1, -1], [0, 2, 1]]) )

  • in_dim (int, optional) – Size of each input sample features. Defaults to 64.

  • out_dim (int, optional) – Size of each output embeddings. Defaults to 64.

  • layer_num (int, optional) – Number of layers. Defaults to 2.

  • init_emb (FloatTensor, optional) – The initial embeddings. Defaults to None, in which case TSVD is used to generate the initial embeddings.

  • init_emb_grad (bool, optional) – Whether to set the initial embeddings to be trainable. (default: False)

  • lamb (float, optional) – Balances the contributions of the overall objective. (default: 5)

  • norm_emb (bool, optional) – Whether to normalize embeddings. (default: False)

forward() → torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class SNEA(node_num: int, edge_index_s: torch.LongTensor, in_dim: int = 64, out_dim: int = 64, layer_num: int = 2, init_emb: Optional[torch.FloatTensor] = None, init_emb_grad: bool = True, lamb: float = 4)[source]

The signed graph attentional layers operator from the “Learning Signed Network Embedding via Graph Attention” paper.

Parameters
  • node_num (int) – The number of nodes.

  • edge_index_s (LongTensor) – The edgelist with sign. (e.g., torch.LongTensor([[0, 1, -1], [0, 2, 1]]) )

  • in_dim (int, optional) – Size of each input sample features. Defaults to 64.

  • out_dim (int, optional) – Size of each output embeddings. Defaults to 64.

  • layer_num (int, optional) – Number of layers. Defaults to 2.

  • init_emb (FloatTensor, optional) – The initial embeddings. Defaults to None, in which case TSVD is used to generate the initial embeddings.

  • init_emb_grad (bool, optional) – Whether to optimize the initial embeddings. (default: True)

  • lamb (float, optional) – Balances the contributions of the overall objective. (default: 4)

forward() → torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class SNEAConv(in_dim: int, out_dim: int, first_aggr: bool, bias: bool = True, norm_emb: bool = True, add_self_loops=True, **kwargs)[source]

The signed graph attentional layers operator from the “Learning Signed Network Embedding via Graph Attention” paper

\[ \begin{align}\begin{aligned}\mathbf{h}_{i}^{\mathcal{B}(l)}=\tanh \left(\sum_{j \in \hat{\mathcal{N}}_{i}^{+}, k \in \mathcal{N}_{i}^{-}} \alpha_{i j}^{\mathcal{B}(l)} \mathbf{h}_{j}^{\mathcal{B}(l-1)} \mathbf{W}^{\mathcal{B}(l)} +\alpha_{i k}^{\mathcal{B}(l)} \mathbf{h}_{k}^{\mathcal{U}(l-1)} \mathbf{W}^{\mathcal{B}(l)}\right)\\\mathbf{h}_{i}^{\mathcal{U}(l)}=\tanh \left(\sum_{j \in \hat{\mathcal{N}}_{i}^{+}, k \in \mathcal{N}_{i}^{-}} \alpha_{i j}^{\mathcal{U}(l)} \mathbf{h}_{j}^{\mathcal{U}(l-1)} \mathbf{W}^{\mathcal{U}(l)} +\alpha_{i k}^{\mathcal{U}(l)} \mathbf{h}_{k}^{\mathcal{B}(l-1)} \mathbf{W}^{\mathcal{U}(l)}\right)\end{aligned}\end{align} \]

In case first_aggr is False, the layer expects x to be a tensor where x[:, :in_dim] denotes the positive node features \(\mathbf{X}^{(\textrm{pos})}\) and x[:, in_dim:] denotes the negative node features \(\mathbf{X}^{(\textrm{neg})}\).

Parameters
  • in_dim (int or tuple) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method. A tuple corresponds to the sizes of source and target dimensionalities.

  • out_dim (int) – Size of each output sample.

  • first_aggr (bool) – Denotes which aggregation formula to use.

  • bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)

  • norm_emb (bool, optional) – Whether to normalize embeddings. (default: True)

  • add_self_loops (bool, optional) – If set to False, will not add self-loops to the input graph. (default: True)

  • **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.

message(x1_j: torch.Tensor, x2_j: torch.Tensor, x1_i: torch.Tensor, x2_i: torch.Tensor, edge_p: torch.Tensor, alpha_func, index: torch.Tensor, ptr: Optional[torch.Tensor], size_i: Optional[int]) → torch.Tensor[source]

Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index. This function can take any argument as input which was initially passed to propagate(). Furthermore, tensors passed to propagate() can be mapped to the respective nodes \(i\) and \(j\) by appending _i or _j to the variable name, e.g. x_i and x_j.

class SGCNConv(in_dim: int, out_dim: int, first_aggr: bool, bias: bool = True, norm_emb: bool = False, **kwargs)[source]

The signed graph convolutional operator from the “Signed Graph Convolutional Network” paper

\[ \begin{align}\begin{aligned}\mathbf{x}_v^{(\textrm{pos})} &= \mathbf{\Theta}^{(\textrm{pos})} \left[ \frac{1}{|\mathcal{N}^{+}(v)|} \sum_{w \in \mathcal{N}^{+}(v)} \mathbf{x}_w , \mathbf{x}_v \right]\\\mathbf{x}_v^{(\textrm{neg})} &= \mathbf{\Theta}^{(\textrm{neg})} \left[ \frac{1}{|\mathcal{N}^{-}(v)|} \sum_{w \in \mathcal{N}^{-}(v)} \mathbf{x}_w , \mathbf{x}_v \right]\end{aligned}\end{align} \]

if first_aggr is set to True, and

\[ \begin{align}\begin{aligned}\mathbf{x}_v^{(\textrm{pos})} &= \mathbf{\Theta}^{(\textrm{pos})} \left[ \frac{1}{|\mathcal{N}^{+}(v)|} \sum_{w \in \mathcal{N}^{+}(v)} \mathbf{x}_w^{(\textrm{pos})}, \frac{1}{|\mathcal{N}^{-}(v)|} \sum_{w \in \mathcal{N}^{-}(v)} \mathbf{x}_w^{(\textrm{neg})}, \mathbf{x}_v^{(\textrm{pos})} \right]\\\mathbf{x}_v^{(\textrm{neg})} &= \mathbf{\Theta}^{(\textrm{pos})} \left[ \frac{1}{|\mathcal{N}^{+}(v)|} \sum_{w \in \mathcal{N}^{+}(v)} \mathbf{x}_w^{(\textrm{neg})}, \frac{1}{|\mathcal{N}^{-}(v)|} \sum_{w \in \mathcal{N}^{-}(v)} \mathbf{x}_w^{(\textrm{pos})}, \mathbf{x}_v^{(\textrm{neg})} \right]\end{aligned}\end{align} \]

otherwise. In case first_aggr is False, the layer expects x to be a tensor where x[:, :in_dim] denotes the positive node features \(\mathbf{X}^{(\textrm{pos})}\) and x[:, in_dim:] denotes the negative node features \(\mathbf{X}^{(\textrm{neg})}\).

Parameters
  • in_dim (int or tuple) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method. A tuple corresponds to the sizes of source and target dimensionalities.

  • out_dim (int) – Size of each output sample.

  • first_aggr (bool) – Denotes which aggregation formula to use.

  • norm_emb (bool, optional) – Whether to normalize embeddings. (default: False)

  • bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)

  • **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.

forward(x: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]], pos_edge_index: Union[torch.Tensor, torch_sparse.tensor.SparseTensor], neg_edge_index: Union[torch.Tensor, torch_sparse.tensor.SparseTensor]) → torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

message(x_j: torch.Tensor) → torch.Tensor[source]

Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index. This function can take any argument as input which was initially passed to propagate(). Furthermore, tensors passed to propagate() can be mapped to the respective nodes \(i\) and \(j\) by appending _i or _j to the variable name, e.g. x_i and x_j.

message_and_aggregate(adj_t: torch_sparse.tensor.SparseTensor, x: Tuple[torch.Tensor, torch.Tensor]) → torch.Tensor[source]

Fuses computations of message() and aggregate() into a single function. If applicable, this saves both time and memory since messages do not explicitly need to be materialized. This function will only get called in case it is implemented and propagation takes place based on a torch_sparse.SparseTensor.
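
Example: a minimal two-layer sketch of the shape convention described above, where the first aggregation concatenates positive and negative parts and deeper layers consume that concatenation (the import path is an assumption):

    import torch
    from torch_geometric_signed_directed.nn.signed import SGCNConv

    x = torch.randn(5, 8)
    pos_edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
    neg_edge_index = torch.tensor([[3, 4], [4, 0]])

    # First aggregation: raw features in, concatenated pos/neg parts out.
    conv1 = SGCNConv(in_dim=8, out_dim=16, first_aggr=True)
    h = conv1(x, pos_edge_index, neg_edge_index)  # shape (5, 32)

    # Deeper layers: in_dim equals the previous out_dim; the input is split as
    # h[:, :in_dim] (positive) and h[:, in_dim:] (negative).
    conv2 = SGCNConv(in_dim=16, out_dim=16, first_aggr=False)
    h = conv2(h, pos_edge_index, neg_edge_index)
    print(h.shape)  # expected: torch.Size([5, 32])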

class MSGNN_link_prediction(num_features: int, hidden: int = 2, q: float = 0.25, K: int = 2, label_dim: int = 2, activation: bool = True, trainable_q: bool = False, layer: int = 2, dropout: float = 0.5, normalization: str = 'sym', cached: bool = False, conv_bias: bool = True, absolute_degree: bool = True)[source]

The MSGNN model for link prediction from the MSGNN: A Spectral Graph Neural Network Based on a Novel Magnetic Signed Laplacian paper.

Parameters
  • num_features (int) – Size of each input sample.

  • hidden (int, optional) – Number of hidden channels. Default: 2.

  • K (int, optional) – Order of the Chebyshev polynomial. Default: 2.

  • q (float, optional) – Initial value of the phase parameter, 0 <= q <= 0.25. Default: 0.25.

  • label_dim (int, optional) – Number of output classes. Default: 2.

  • activation (bool, optional) – Whether to use an activation function. (default: True)

  • trainable_q (bool, optional) – Whether to set q to be trainable. (default: False)

  • layer (int, optional) – Number of MSConv layers. Default: 2.

  • dropout (float, optional) – Dropout value. (default: 0.5)

  • normalization (str, optional) – The normalization scheme for the signed directed Laplacian (default: sym): 1. None: No normalization \(\mathbf{L} = \bar{\mathbf{D}} - \mathbf{A} \odot \exp(i \Theta^{(q)})\) 2. "sym": Symmetric normalization \(\mathbf{L} = \mathbf{I} - \bar{\mathbf{D}}^{-1/2} \mathbf{A} \bar{\mathbf{D}}^{-1/2} \odot \exp(i \Theta^{(q)})\), where \(\odot\) denotes element-wise (Hadamard) multiplication.

  • cached (bool, optional) – If set to True, the layer will cache the __norm__ matrix on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)

  • conv_bias (bool, optional) – Whether to use bias in the convolutional layers, default True.

  • absolute_degree (bool, optional) – Whether to calculate the degree matrix with respect to absolute entries of the adjacency matrix. (default: True)

forward(real: torch.FloatTensor, imag: torch.FloatTensor, edge_index: torch.LongTensor, query_edges: torch.LongTensor, edge_weight: Optional[torch.FloatTensor] = None) → torch.FloatTensor[source]

Making a forward pass of the MSGNN link prediction model.

Arg types:
  • real, imag (PyTorch Float Tensor) - Node features.

  • edge_index (PyTorch Long Tensor) - Edge indices.

  • query_edges (PyTorch Long Tensor) - Edge indices for querying labels.

  • edge_weight (PyTorch Float Tensor, optional) - Edge weights corresponding to edge indices.

Return types:
  • log_prob (PyTorch Float Tensor) - Logarithmic class probabilities for all query edges, with shape (num_query_edges, num_classes).

class MSGNN_node_classification(num_features: int, hidden: int = 2, q: float = 0.25, K: int = 2, label_dim: int = 2, activation: bool = False, trainable_q: bool = False, layer: int = 2, dropout: float = False, normalization: str = 'sym', cached: bool = False, conv_bias: bool = True, absolute_degree: bool = True)[source]

The MSGNN model for node classification from the MSGNN: A Spectral Graph Neural Network Based on a Novel Magnetic Signed Laplacian paper.

Parameters
  • num_features (int) – Size of each input sample.

  • hidden (int, optional) – Number of hidden channels. Default: 2.

  • K (int, optional) – Order of the Chebyshev polynomial. Default: 2.

  • q (float, optional) – Initial value of the phase parameter, 0 <= q <= 0.25. Default: 0.25.

  • label_dim (int, optional) – Number of output classes. Default: 2.

  • activation (bool, optional) – Whether to use an activation function. (default: False)

  • trainable_q (bool, optional) – Whether to set q to be trainable. (default: False)

  • layer (int, optional) – Number of MSConv layers. Default: 2.

  • dropout (float, optional) – Dropout value. (default: False)

  • normalization (str, optional) – The normalization scheme for the signed directed Laplacian (default: sym): 1. None: No normalization \(\mathbf{L} = \bar{\mathbf{D}} - \mathbf{A} \odot \exp(i \Theta^{(q)})\) 2. "sym": Symmetric normalization \(\mathbf{L} = \mathbf{I} - \bar{\mathbf{D}}^{-1/2} \mathbf{A} \bar{\mathbf{D}}^{-1/2} \odot \exp(i \Theta^{(q)})\), where \(\odot\) denotes element-wise (Hadamard) multiplication.

  • cached (bool, optional) – If set to True, the layer will cache the __norm__ matrix on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)

  • conv_bias (bool, optional) – Whether to use bias in the convolutional layers, default True.

  • absolute_degree (bool, optional) – Whether to calculate the degree matrix with respect to absolute entries of the adjacency matrix. (default: True)

forward(real: torch.FloatTensor, imag: torch.FloatTensor, edge_index: torch.LongTensor, edge_weight: Optional[torch.FloatTensor] = None) → torch.FloatTensor[source]

Making a forward pass of the MSGNN node classification model.

Arg types:
  • real, imag (PyTorch Float Tensor) - Node features.

  • edge_index (PyTorch Long Tensor) - Edge indices.

  • edge_weight (PyTorch Float Tensor, optional) - Edge weights corresponding to edge indices.

Return types:
  • log_prob (PyTorch Float Tensor) - Logarithmic class probabilities for all nodes, with shape (num_nodes, num_classes).

class MSConv(in_channels: int, out_channels: int, K: int, q: float, trainable_q: bool, normalization: str = 'sym', bias: bool = True, cached: bool = False, absolute_degree: bool = True, **kwargs)[source]

Magnetic Signed Laplacian Convolution Layer from the MSGNN: A Spectral Graph Neural Network Based on a Novel Magnetic Signed Laplacian paper.

Parameters
  • in_channels (int) – Size of each input sample.

  • out_channels (int) – Size of each output sample.

  • K (int) – Chebyshev filter size \(K\).

  • q (float, optional) – Initial value of the phase parameter, 0 <= q <= 0.25. Default: 0.25.

  • trainable_q (bool, optional) – Whether to set q to be trainable. (default: False)

  • normalization (str, optional) – The normalization scheme for the magnetic signed Laplacian (default: sym): 1. None: No normalization \(\mathbf{L} = \bar{\mathbf{D}} - \mathbf{A} \odot \exp(i \Theta^{(q)})\) 2. "sym": Symmetric normalization \(\mathbf{L} = \mathbf{I} - \bar{\mathbf{D}}^{-1/2} \mathbf{A} \bar{\mathbf{D}}^{-1/2} \odot \exp(i \Theta^{(q)})\), where \(\odot\) denotes element-wise (Hadamard) multiplication.

  • cached (bool, optional) – If set to True, the layer will cache the __norm__ matrix on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)

  • bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)

  • absolute_degree (bool, optional) – Whether to calculate the degree matrix with respect to absolute entries of the adjacency matrix. (default: True)

  • **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.

forward(x_real: torch.FloatTensor, x_imag: torch.FloatTensor, edge_index: torch.LongTensor, edge_weight: Optional[torch.Tensor] = None, lambda_max: Optional[torch.Tensor] = None) → torch.FloatTensor[source]

Making a forward pass of the Signed Directed Magnetic Laplacian Convolution layer.

Arg types:
  • x_real, x_imag (PyTorch Float Tensor) - Node features.

  • edge_index (PyTorch Long Tensor) - Edge indices.

  • edge_weight (PyTorch Float Tensor, optional) - Edge weights corresponding to edge indices.

  • lambda_max (optional, but mandatory if normalization is None) - Largest eigenvalue of Laplacian.

Return types:
  • out_real, out_imag (PyTorch Float Tensor) - Hidden state tensor for all nodes, with shape (N_nodes, F_out).

message(x_j, norm)[source]

Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index. This function can take any argument as input which was initially passed to propagate(). Furthermore, tensors passed to propagate() can be mapped to the respective nodes \(i\) and \(j\) by appending _i or _j to the variable name, e.g. x_i and x_j.

Auxiliary Methods and Layers

class complex_relu_layer[source]

The complex ReLU layer from the MagNet: A Neural Network for Directed Graphs paper.

complex_relu(real: torch.FloatTensor, img: torch.FloatTensor)[source]

Complex ReLU function.

Arg types:
  • real, imag (PyTorch Float Tensor) - Node features.

Return types:
  • real, imag (PyTorch Float Tensor) - Node features after complex ReLU.

forward(real: torch.FloatTensor, img: torch.FloatTensor)[source]

Making a forward pass of the complex ReLU layer.

Arg types:
  • real, imag (PyTorch Float Tensor) - Node features.

Return types:
  • real, imag (PyTorch Float Tensor) - Node features after complex ReLU.
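
Example: a minimal sketch (the import path is an assumption):

    import torch
    from torch_geometric_signed_directed.nn.directed import complex_relu_layer

    act = complex_relu_layer()
    real = torch.randn(4, 8)
    imag = torch.randn(4, 8)
    # Both parts are masked by the sign of the real part: entries whose real
    # component is negative are zeroed in both outputs.
    real_out, imag_out = act(real, imag)
    print(real_out.shape, imag_out.shape)  # both torch.Size([4, 8])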