Class List

Here are the classes, structs, unions and interfaces with brief descriptions:

gnn | Namespace containing all GraphNN classes |

AbsError | Operator for absolute error |

ArgMax | Operator for argument maximum |

Axpby | Operator: same as axpby in BLAS (y = a * x + b * y) |

BinaryLogLoss | Operator for binary log loss |

ConcatCols | Operator: concatenate two or more tensors (matrices) along columns; they must have the same number of rows |

CrossEntropy | Operator for cross entropy |

ElewiseAdd | Operator: element-wise addition of two or more tensors; broadcasting is supported only for the two-tensor case |

ElewiseMinus | Operator: element-wise subtraction of two tensors; broadcasting supported |

ElewiseMul | Operator: element-wise multiplication of two or more tensors; broadcasting is supported only for the two-tensor case |

Entropy | Operator for calculating entropy |

Exp | Exp operator |

Factor | Abstract class of operators. Since we represent the computation graph as a factor graph, the factors here represent the relations between variables |

FactorGraph | Computation graph; responsible for representing the factor graph, as well as the execution |

FullyConnected | Fully connected operator |

HitAtK | Operator: whether the top-k predictions hit the label set |

Identity | Identity operator |

InTopK | Operator: whether the true label is in top-k of prediction |

IsEqual | Operator: element-wise equality test |

JaggedSoftmax | Jagged softmax activation operator |

Kxplusb | Operator: y = k * x + b |

L2ColNorm | Operator: L2-normalize each column of the matrix |

MatMul | Matrix multiplication operator |

MovingNorm | Operator for normalization with moving (running) statistics |

IMsgPass | Abstract class that constructs a sparse matrix from the graph; used for message passing |

Node2NodeMsgPass | Message passing from nodes to nodes |

Edge2NodeMsgPass | Message passing from edges to nodes |

Node2EdgeMsgPass | Message passing from nodes to edges |

Edge2EdgeMsgPass | Message passing from edges to edges |

SubgraphMsgPass | Message passing from nodes to subgraphs |

MultiMatMul | Operator for multiple pairs of matrix multiplications; used to save space (i.e., one operator instead of several MatMul + element-wise add layers) |

MultinomialSample | Class for multinomial sampling |

NatLog | Natural logarithm operator |

OneHot | Class for one-hot sparse representation |

IOptimizer | Abstract class for optimizer |

SGDOptimizer | Class for the plain SGD optimizer |

MomentumSGDOptimizer | Class for the momentum SGD optimizer |

AdamOptimizer | Class for the Adam optimizer |

ParamSet | Set of learnable params |

Reduce | The reduction operator |

ReduceMean | The reduction operator for calculating mean value |

ReLU | Rectified linear unit (ReLU) operator |

RowSelection | Operator: row selection |

Sigmoid | Sigmoid operator |

Softmax | Softmax activation operator |

SquareError | Operator for square error |

Tanh | Hyperbolic tangent activation operator |

TypeCast | Operator used for casting type |

IDifferentiable | Interface for differentiable variables |

Variable | Abstract class of variables; variables are the objects that hold the inputs to operators as well as the outputs from them |

GraphVar | Class for graph variable |

TensorVarTemplate | Implementation of TensorVar; |

TensorVar | Class for tensor variable, which is the most common variable in this package |

TensorVarTemplate< mode, DENSE, Dtype > | DENSE tensor specialization of TensorVar |

TensorVarTemplate< mode, SPARSE, Dtype > | SPARSE tensor specialization of TensorVar |

BinaryMul | Class for binary multiplication |

BinaryEngine | Class for binary engine |

TensorTemplate< CPU, DENSE, Dtype > | CPU DENSE specialization of tensor |

TensorTemplate< CPU, DENSE, int > | CPU DENSE int tensor specialization; this tensor is not used for heavy computation (e.g., matmul) |

TensorTemplate< CPU, SPARSE, Dtype > | CPU SPARSE specialization of Tensor |

TensorTemplate< CPU, SPARSE, int > | CPU SPARSE int tensor specialization; this tensor is not used for heavy computation (e.g., matmul) |

UnarySet< CPU, Dtype > | CPU specialization of UnarySet |

UnaryTruncate< CPU, Dtype > | CPU specialization of UnaryTruncate |

UnarySigmoid< CPU, Dtype > | CPU specialization of UnarySigmoid |

UnaryRandNorm< CPU, Dtype > | CPU specialization of UnaryRandNorm |

UnaryRandUniform< CPU, Dtype > | CPU specialization of UnaryRandUniform |

UnaryEngine< CPU > | Class for unary engine, CPU specialization |

NormalRandomizer | Randomizer for the normal (Gaussian) distribution |

BinomialRandomizer | Randomizer for the binomial distribution |

UniformRandomizer | Randomizer for the uniform distribution |

ChisquareRandomizer | Randomizer for the chi-squared distribution |

GpuHandle | Handle for GPU resources |

TDataTemplate | Implementation of the TData abstraction |

TData | Data object used to hold the values of a tensor |

TDataTemplate< CPU, DENSE, Dtype > | CPU DENSE specialization |

TDataTemplate< mode, SPARSE, Dtype > | SPARSE specialization of tensor data object |

TShape | Class for shape |

TensorTemplate | Implementation of abstract tensor |

Tensor | Abstract Class for tensor |

UnarySet | Functor to set an element |

UnaryScale | Functor to scale an element |

UnaryAdd | Functor to add an element |

UnaryRandNorm | Functor to set an element from normal distribution |

UnaryRandUniform | Functor to set an element from uniform distribution |

UnaryAbs | Functor to take the absolute value of an element; not needed on CPU |

UnaryInv | Functor to invert an element; not needed on CPU |

UnaryReLU | ReLU functor |

UnarySigmoid | Sigmoid functor |

UnaryTanh | Tanh functor |

UnarySquare | Functor to square an element; not needed on CPU |

UnarySqrt | Functor to take the square root of an element; not needed on CPU |

UnaryInvSqrt | Functor to take the inverse square root of an element; not needed on CPU |

UnaryLog | Functor to take the logarithm of an element; not needed on CPU |

UnaryExp | Functor to exponentiate an element; not needed on CPU |

UnaryTruncate | Functor to truncate an element |

UnaryEngine | Class for unary engine |

CPU | CPU token; used for template dispatch |

GPU | GPU token; used for template dispatch |

DENSE | DENSE tensor token; used for template dispatch |

SPARSE | SPARSE tensor token; used for template dispatch |

LinkedTable | Class for linked table (an array of linked list) |

GraphStruct | Structure of a (directed) graph |

MemHolder | Responsible for memory allocation and deletion; the dynamic computation graph will get better performance with persistent memory |

build_indices | Template metaprogram that builds a compile-time index sequence |

build_indices< 0 > | Base case terminating the index-sequence recursion |

indices | Holder for a compile-time pack of indices |

Generated on Mon May 22 2017 15:12:11 for GraphNN by Doxygen 1.8.6