API
Public Interface
Graph creation
ComputationGraphs.ComputationGraph — Type
Structure used to store the computation graph.
Fields:
- nodes::Vector{AbstractNode}: vector with all nodes in the graph in topological order: children always appear after parents.
- children::Vector{Vector{Int}}: vector with all the children of each node
- parents::Vector{Vector{Int}}: vector with all the parents of each node
- validValue::BitVector: boolean vector; true indicates that the node contains a valid value
- compute_with_ancestors::Vector{FunctionWrapper}: vector of functions to compute each node and all the required ancestors
- count::Vector{UInt}: number of times each node has been computed since the last resetLog!
ComputationGraphs.@add — Macro
@add graph expression
Macro to add a complex expression into a computation graph.
This macro "breaks" the complex expression into elementary subexpressions and adds them all to the graph.
Parameters:
- graph::ComputationGraph: graph where the expression will be stored
- expression::Expr: expression to be added to the graph
Returns:
- Node::AbstractNode: graph node for the final expression
Example:
The following code provides two alternatives to create a computation graph to evaluate err = ||A*x - b||^2
- without the @add macro:
```julia
using ComputationGraphs
gr = ComputationGraph(Float32)
A = variable(gr, 3, 4)
x = variable(gr, 4)
b = variable(gr, 3)
Ax = *(gr, A, x)
Axb = -(gr, Ax, b)
err = norm2(gr, Axb)
display(gr)
```
- with the @add macro:
```julia
using ComputationGraphs
gr = ComputationGraph(Float32)
A = @add gr variable(3, 4)
x = @add gr variable(4)
b = @add gr variable(3)
err = @add gr norm2(times(A, x) - b)
display(gr)
```
ComputationGraphs.variable — Function
variable(graph, dims)
variable(graph, dims...)
variable(graph, value)
Creates a variable of the given dimension.
Parameters:
- graph::ComputationGraph: Computation graph where the variable will be stored.
- dims::NTuple{N,Int}: Desired dimension of the variable. An empty tuple () results in a scalar variable.
- value::AbstractArray: Initial value for the variable, which implicitly defines its dimension. If value is a scalar, it is first converted to a 0-dimensional array using fill(value).
Returns:
- node of the computation graph associated with the variable created
ComputationGraphs.constant — Function
constant(graph, value)
Creates a (constant) array equal to the given value.
Parameters:
- graph::ComputationGraph: Computation graph where the array will be stored.
- value::AbstractArray: Desired value for the array. If value is a scalar, it is first converted to a 0-dimensional array using fill(value).
Returns:
- node of the computation graph associated with the array created
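A short sketch combining variable and constant (values for variables are supplied later with set!, documented under Graph computations below):
```julia
using ComputationGraphs
gr = ComputationGraph(Float64)
x = variable(gr, 3)                 # 3-vector variable; value supplied later with set!
s = variable(gr, ())                # scalar variable (empty dims tuple)
c = constant(gr, [1.0, 2.0, 3.0])   # constant node; its value cannot be changed
set!(gr, x, [0.5, 0.5, 0.5])
set!(gr, s, fill(2.0))              # 0-dimensional array for the scalar variable
```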
Base.zeros — Function
zeros(graph, dims)
zeros(graph, dims...)
Creates an array filled with 0's.
Parameters:
- graph::ComputationGraph: computation graph where the array will be stored
- dims::NTuple{N,Int}: dimension of the array
Returns:
- node of the computation graph associated with an array filled with zero(TypeValue)
Base.ones — Function
ones(graph, dims)
ones(graph, dims...)
Creates an array filled with 1's.
Parameters:
- graph::ComputationGraph: computation graph where the array will be stored
- dims::NTuple{N,Int}: dimension of the array
Returns:
- node of the computation graph associated with an array filled with one(TypeValue)
ComputationGraphs.unitvector — Function
unitvector(dims, k)
Creates the k-th vector of the canonical basis for the linear space with dimension dims.
Base.size — Function
size(node)
size(graph, node)
Returns a tuple with the size of the array associated with a node of a computation graph.
size(node, dim)
size(graph, node, dim)
Returns the size of the array associated with a node of a computation graph, along dimension dim.
Base.length — Function
length(node)
length(graph, node)
Returns the number of entries of the array associated with a node of a computation graph.
length(graph)
Returns the number of nodes in the graph.
ComputationGraphs.typeofvalue — Function
Type of the node's value.
Base.similar — Function
similar(node)
similar(graph, node)
Creates an uninitialized array with the same type and size as the graph node.
Base.eltype — Function
eltype(node)
Returns the type of the entries of a node.
ComputationGraphs.memory — Function
Total memory for all the variables stored in the graph.
ComputationGraphs.nodeValue — Function
nodeValue(node)
Returns the current value of a node (without any evaluation).
Base.Multimedia.display — Function
display(node)
display(nodes)
Display one node of a computation graph or a tuple of nodes.
display(graph; topTimes=false)
Display the nodes of a computation graph. When topTimes=true, only displays the nodes with the largest total computation times (and hides information about parents/children).
display(graph, node; withParents=true)
When withParents=true, shows the full expression needed to compute a specific node; otherwise only shows the specific node (as in display(node)).
Operations supported
Base.adjoint — Function
adjoint() computes the adjoint/transpose of a vector or matrix
ComputationGraphs.adjoint_ — Function
adjoint_() computes the adjoint/transpose of a vector or matrix
ComputationGraphs.adjointTimes — Function
adjointTimes(A,x) = A'*x computes the product of the adjoint of the matrix A with a matrix/vector x
ComputationGraphs.adjointTimesExpandColumns — Function
adjointTimesExpandColumns(A,x,rows) = A'*expandColumns(x,rows) computes the product of the adjoint of the matrix A with expandColumns(x,rows)
ComputationGraphs.affine — Function
affine(A,x,b) = A*x .+ b, where b is a vector and x can be a vector or a matrix
ComputationGraphs.affineRows — Function
affineRows(A,x,b,rows) = (A*x+b)[rows,:]
ComputationGraphs.column — Function
column(A,k) returns column k of A as a vector
ComputationGraphs.divideScalar — Function
divideScalar(a, b) = a ./ b, where b is a scalar
ComputationGraphs.expandColumns — Function
expandColumns(a, rows, nRows)
Expands a vector a into a matrix A as follows: given an n-vector a, returns an nRows × n matrix A with A[i,j] = a[j] if i == rows[j], else 0.
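For concreteness, a plain-Julia reference implementation of these semantics, outside the graph API (the helper name expand_columns_ref is hypothetical, not part of the package):
```julia
# Reference semantics of expandColumns(a, rows, nRows):
function expand_columns_ref(a::AbstractVector, rows::AbstractVector{<:Integer}, nRows::Integer)
    A = zeros(eltype(a), nRows, length(a))
    for j in eachindex(a)
        A[rows[j], j] = a[j]   # column j has its only nonzero entry in row rows[j]
    end
    return A
end

expand_columns_ref([10.0, 20.0, 30.0], [2, 1, 3], 3)
# 3×3 Matrix{Float64}:
#   0.0  20.0   0.0
#  10.0   0.0   0.0
#   0.0   0.0  30.0
```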
ComputationGraphs.expandColumnsTimesAdjoint — Function
expandColumnsTimesAdjoint(x,y,rows,nRows) = expandColumns(x,rows,nRows)*y'
ComputationGraphs.exponentScalar — Function
exponentScalar(a, b) = a .^ b, where b is a scalar
ComputationGraphs.findMaxRow — Function
y = findMaxRow(A)
Creates an integer-valued vector y with as many entries as columns of A, where y[j] is equal to the index of the row with the largest entry in column j of A.
ComputationGraphs.huber — Function
huber() computes the Huber loss of a vector or matrix
ComputationGraphs.maxRow — Function
maxRow(A) computes a vector y with as many entries as columns of A, where y[j] is equal to the largest entry in column j of A
ComputationGraphs.minus — Function
minus() unary minus of a vector or matrix
ComputationGraphs.norm1 — Function
norm1() computes the sum of the absolute values of a vector or matrix
ComputationGraphs.norm2 — Function
norm2() computes the sum of the squared values of a vector or matrix
ComputationGraphs.plus — Function
a + b addition operator
ComputationGraphs.pointDivide — Function
pointDivide(a, b) = a ./ b
ComputationGraphs.pointTimes — Function
pointTimes(a, b) = a .* b
ComputationGraphs.scalarDivide — Function
scalarDivide(a, b) = a ./ b, where a is a scalar
ComputationGraphs.scalarPlus — Function
scalarPlus(a, b) = a .+ b, where a is a scalar
ComputationGraphs.scalarTimes — Function
scalarTimes(a,M) = a .* M computes the product of a scalar a by a matrix M
ComputationGraphs.selectRows — Function
selectRows(A,rows) = y, where y[j] = A[rows[j],j]
ComputationGraphs.subtract — Function
a - b subtraction operator
ComputationGraphs.sumColumns — Function
sumColumns(A) returns a vector with the sums of the columns of a matrix A
ComputationGraphs.sumExpandColumns — Function
sumExpandColumns(x,rows,nRows) = sumColumns(expandColumns(x,rows,nRows))
ComputationGraphs.times — Function
times(A,x) computes the product of a matrix A by a matrix/vector x
ComputationGraphs.timesAdjoint — Function
timesAdjoint(x, y) = x * y'
ComputationGraphs.timesAdjointOnes — Function
timesAdjointOnes(x,n) = x*ones(n)
ComputationGraphs.unitTimesAdjoint — Function
unitTimesAdjoint(y,dims,k) = unitvector(dims,k)*y'
Base.:+ — Function
a + b addition operator
Base.:- — Function
-() unary minus operator for a vector or matrix
a - b subtraction operator
Base.:* — Function
a * b maps to times() or scalarTimes() depending on the sizes of the arguments
Base.:^ — Function
a ^ b maps to exponentScalar(a,b)
LogExpFunctions.logistic — Function
logistic(x) = 1/(1+exp(-x)) computes the logistic function of all entries of a vector or matrix
ComputationGraphs.relu — Function
relu() computes the relu (max with 0) of all entries of a vector or matrix
ComputationGraphs.ddlogistic — Function
ddlogistic(x) = (exp(-2x)-exp(-x))/(1+exp(-x))^3 computes the second derivative of the logistic function of all entries of a vector or matrix
ComputationGraphs.dlogistic — Function
dlogistic(x) = exp(-x)/(1+exp(-x))^2 computes the derivative of the logistic function of all entries of a vector or matrix
Base.exp — Function
exp() computes the exponential of all entries of a vector or matrix
ComputationGraphs.heaviside — Function
heaviside() computes the heaviside (>0 indicator) of all entries of a vector or matrix
ComputationGraphs.sat — Function
sat() computes the saturation function of all entries of a vector or matrix
Base.sign — Function
sign() computes the sign function of all entries of a vector or matrix
Base.sqrt — Function
sqrt() takes the square root of all entries of a vector or matrix
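A brief sketch tying these operations together; it assumes the elementwise functions follow the same graph-first calling convention used by norm2 in the @add example above:
```julia
using ComputationGraphs
gr = ComputationGraph(Float32)
x = variable(gr, 5)
y = ComputationGraphs.relu(gr, x)   # assumed graph-first call, as with norm2(gr, ...)
z = @add gr norm2(exp(x) - x)       # @add breaks the expression into elementary nodes
```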
Differentiation
ComputationGraphs.D — Function
Y = D(graph, F, P)
Y = D(graph, V, F, P)
Computes the partial derivative of the expression encoded in the node F with respect to the variable encoded in the node P, along the direction V. Formally, Y is a scalar/vector/matrix with the same size as the variable P, with its jth entry equal to
$Y[j] = \sum_i V[i] \nabla_{P[j]} F[i]$
where $\nabla_{P[j]} F[i]$ is the partial derivative of the ith entry of F with respect to the jth entry of P.
The direction V can be omitted when F is a scalar, in which case
$Y[j] = \nabla_{P[j]} F$
Parameters
- graph::ComputationGraph: Computation graph encoding the relevant expressions and variables.
- V::Node: Direction along which the partial derivative is computed. This node needs to have the same size as F.
- F::Node: Expression to be differentiated.
- P::NodeVariable: Variable with respect to which F will be differentiated. This node must have been created using variable.
Returns
- Y::Node: Node that encodes the expression of the partial derivative (added to the graph if it was not already part of it). This node will have the same size as P.
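A minimal sketch that builds the gradient of a scalar loss with respect to x, using the functions documented above:
```julia
using ComputationGraphs
gr = ComputationGraph(Float64)
A = variable(gr, 4, 3)
x = variable(gr, 3)
b = variable(gr, 4)
loss = @add gr norm2(times(A, x) - b)  # scalar-valued, so the direction V can be omitted
g = D(gr, loss, x)                     # gradient node; same size as x
set!(gr, A, ones(4, 3)); set!(gr, x, [1.0, 2.0, 3.0]); set!(gr, b, zeros(4))
get(gr, g)                             # evaluates 2*A'*(A*x - b)
```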
ComputationGraphs.hessian — Function
Y = hessian(graph, F, P, Q)
Computes the Hessian matrix of the expression encoded in the (scalar-valued) node F with respect to the variables encoded in the (vector-valued) nodes P and Q. Formally, Y is a matrix with its (i,j)th entry equal to
$Y[i,j] = \nabla_{P[i]} \nabla_{Q[j]} F$
where $\nabla_{X}$ denotes partial derivative with respect to X.
Parameters
- graph::ComputationGraph: Computation graph encoding the relevant expressions and variables.
- F::Node: Expression to be differentiated.
- P::NodeVariable: First variable with respect to which F will be differentiated. This node must have been created using variable.
- Q::NodeVariable: Second variable with respect to which F will be differentiated. This node must have been created using variable.
Returns
- Y::Node: Node that encodes the expression of the Hessian matrix (added to the graph if it was not already part of it).
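Continuing the sketch above: for the quadratic loss ||A*x - b||^2, the Hessian with respect to (x, x) is the constant matrix 2*A'*A, which gives a quick sanity check:
```julia
H = hessian(gr, loss, x, x)   # 3×3 node of second derivatives
get(gr, H)                    # equals 2*A'*A for this quadratic loss
```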
Graph computations
ComputationGraphs.set! — Function
set!(graph, node, value)
set!(graph, nodes, values)
Update a variable node:
- set the value of a variable node
- mark all the children as having invalid values
ComputationGraphs.compute! — Function
compute!(graph)
Recompute the whole graph.
compute!(graph, node)
Recompute only what is needed to get a node.
compute!(graph, nodes)
Recompute only what is needed to get a vector/tuple of nodes.
Base.get — Function
get(graph, node)
Get the value of a node, performing whatever computations are needed.
get(graph, nodes)
Get the values of a list of nodes.
Base.copyto! — Function
copyto!(graph, destination, source)
Copy the value of a source node to a destination node:
- performs whatever computations are needed for the source node to be valid
- copies the value of the source node to the destination node
- marks all children of the destination node as having invalid values
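A small sketch of the update/evaluate cycle these four functions implement (values are cached and only recomputed when an upstream set! or copyto! invalidates them):
```julia
using ComputationGraphs
gr = ComputationGraph(Float64)
u = variable(gr, 3)
v = variable(gr, 3)
s = @add gr norm2(u - v)
set!(gr, u, [1.0, 2.0, 3.0])  # marks s as invalid
set!(gr, v, [1.0, 1.0, 1.0])
compute!(gr, s)               # recomputes only what s needs
println(get(gr, s))           # 0^2 + 1^2 + 2^2 = 5
copyto!(gr, u, v)             # copy v's value into u; s becomes invalid again
```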
Recipes
ComputationGraphs.gradDescent! — Function
(; next_theta, eta, gradients) = gradDescent!(graph; loss, theta)
Recipe used to perform the computations needed by the classical gradient descent algorithm to minimize a (scalar-valued) loss function $J(\theta)$ by adjusting a set of optimization parameters $\theta$, according to
\[ \theta^+ = \theta - \eta\, \nabla_\theta J(\theta)\]
Parameters:
- graph::ComputationGraph: Computation graph that is updated "in-place" by adding to it all the nodes needed to perform one step of gradient descent.
- loss::Node: Scalar-valued computation node that corresponds to the loss function $J(\theta)$.
- theta::NamedTuple: Named tuple with the variable nodes that correspond to the optimization parameters $\theta$.
Returns: named tuple with
- eta::Node: Scalar-valued variable node that can be used to set the learning rate $\eta$.
- next_theta::NamedTuple: Named tuple with the computation nodes that hold the value $\theta^+$ of the optimization parameters after one gradient descent iteration.
- gradients::NamedTuple: Named tuple of the computation nodes that hold the value of the gradients of the loss function with respect to the different variables in theta.
Example:
```julia
using ComputationGraphs
graph = ComputationGraph(Float64)
# Define optimization parameters and loss function
A = variable(graph, 4, 3)
x = variable(graph, 3)
b = variable(graph, 4)
loss = @add graph norm2(times(A, x) - b)
# Call the gradDescent! recipe
theta = (; x)
(; next_theta, eta, gradients) = gradDescent!(graph; loss, theta)
# Set fixed parameters
set!(graph, A, [1.0 2.0 3.0; 4.0 5.0 6.0; 7.0 8.0 9.0; 10.0 11.0 12.0])
set!(graph, b, [2.0, 2.0, 2.0, 2.0])
# Set learning rate
set!(graph, eta, 0.001)
# Initialize optimization parameter
set!(graph, x, [1.0, 1.0, 1.0])
println("initial loss: ", get(graph, loss))
# Gradient descent loop
for i in 1:100
    compute!(graph, next_theta)       # compute next value of theta
    copyto!(graph, theta, next_theta) # execute update
end
println("final loss: ", get(graph, loss))
```
ComputationGraphs.adam! — Function
(; eta, beta1, beta2, epsilon,
   init_state, state, next_state,
   next_theta, gradients) = adam!(graph; loss, theta)
Recipe used to perform the computations needed by the Adam method to minimize a (scalar-valued) loss function $J(\theta)$ by adjusting a set of optimization parameters $\theta$.
The algorithm is described in Adam, using the comment just before section 2.1 for a more efficient implementation.
Parameters:
- graph::ComputationGraph: Computation graph that is updated "in-place" by adding to it all the nodes needed to perform one step of gradient descent.
- loss::Node: Scalar-valued computation node that corresponds to the loss function $J(\theta)$.
- theta::NamedTuple: Named tuple with the variable nodes that correspond to the optimization parameters $\theta$.
Returns: named tuple with the following nodes/tuples of nodes
- eta: Scalar-valued variable node used to set the learning rate $\eta$.
- beta1: Scalar-valued variable node used to set Adam's beta1 parameter.
- beta2: Scalar-valued variable node used to set Adam's beta2 parameter.
- epsilon: Scalar-valued variable node used to set Adam's epsilon parameter.
- init_state, state, next_state: Adam's internal state initializer, current value, and next value, which include the iteration number and the 2 moments.
- next_theta::Tuple: value $\theta^+$ of the optimization parameters after one gradient descent iteration.
- gradients: gradients of the loss function with respect to the different variables in theta.
Example:
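The sketch below mirrors the gradDescent! example above; the initialization of state from init_state and the per-iteration update from next_state are assumptions inferred from the returned fields, not a documented calling sequence:
```julia
using ComputationGraphs
graph = ComputationGraph(Float64)
A = variable(graph, 4, 3); x = variable(graph, 3); b = variable(graph, 4)
loss = @add graph norm2(times(A, x) - b)
theta = (; x)
(; eta, beta1, beta2, epsilon, init_state, state, next_state, next_theta, gradients) =
    adam!(graph; loss, theta)
set!(graph, A, [1.0 2.0 3.0; 4.0 5.0 6.0; 7.0 8.0 9.0; 10.0 11.0 12.0])
set!(graph, b, [2.0, 2.0, 2.0, 2.0])
set!(graph, eta, 0.01); set!(graph, beta1, 0.9)
set!(graph, beta2, 0.999); set!(graph, epsilon, 1e-8)
set!(graph, x, [1.0, 1.0, 1.0])
copyto!(graph, state, init_state)                    # assumed state initialization
for i in 1:100
    compute!(graph, (next_theta..., next_state...))  # tuple form per compute! docs
    copyto!(graph, theta, next_theta)
    copyto!(graph, state, next_state)
end
println("final loss: ", get(graph, loss))
```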
ComputationGraphs.denseChain! — Function
(; inference, training, theta) = denseChain!(graph;
    nNodes, inferenceBatchSize, trainingBatchSize, activation, loss)
Recipe used to construct a graph for inference and training of a dense forward neural network.
```
x[1]   = input
x[2]   = activation(W[1] * x[1] + b[1])
...
x[N-1] = activation(W[N-2] * x[N-2] + b[N-2])
output = W[N-1] * x[N-1] + b[N-1]   # no activation in the last layer
```
with a loss function of
```
loss = lossFunction(output - reference)
```
Parameters:
- graph::ComputationGraph: Computation graph that is updated "in-place" by adding to it all the nodes needed to perform one step of gradient descent.
- nNodes::Vector{Int}: Vector with the number of nodes in each layer, starting from the input and ending at the output layer.
- inferenceBatchSize::Int=1: Number of inputs for each inference batch. When inferenceBatchSize=0, no nodes will be created for inference.
- trainingBatchSize::Int=0: Number of inputs for each training batch. When trainingBatchSize=0, no nodes will be created for training.
- activation::Function=ComputationGraphs.relu: Activation function. Use the identity function if no activation is desired.
- loss::Symbol=:mse: Desired type of loss function, among the options:
  + :sse = sum of square error
  + :mse = mean-square error (i.e., sse normalized by the error size)
  + :huber = Huber function on the error
  + :mhuber = Huber function on the error, normalized by the error size
Returns: named tuple with the following fields
- inference::NamedTuple: named tuple with the inference nodes:
  + input: NN input for inference
  + output: NN output for inference
  When inferenceBatchSize=0 this tuple is returned empty.
- training::NamedTuple: named tuple with the training nodes:
  + input: NN input for training
  + output: NN output for training
  + reference: NN desired output for training
  + loss: NN loss for training
  When trainingBatchSize=0 this tuple is returned empty.
- theta::NamedTuple: named tuple with the NN parameters (all the matrices W and b)
Example:
```julia
using ComputationGraphs, Random
graph = ComputationGraph(Float32)
(; inference, training, theta) = denseChain!(graph;
    nNodes=[1, 20, 20, 20, 2], inferenceBatchSize=1, trainingBatchSize=3,
    activation=ComputationGraphs.relu, loss=:mse)
# (repeatable) random initialization of the weights
Random.seed!(0)
for k in eachindex(theta)
    set!(graph, theta[k], randn(Float32, size(theta[k])))
end
# Compute output for a random input
input = randn(Float32, size(inference.input))
set!(graph, inference.input, input)
output = get(graph, inference.output)
println("input = ", input, ", output = ", output)
# Compute loss for a batch of random inputs and desired outputs (reference)
input = randn(Float32, size(training.input))
reference = randn(Float32, size(training.reference))
set!(graph, training.input, input)
set!(graph, training.reference, reference)
loss = get(graph, training.loss)
println("inputs = ", input, ", loss = ", loss)
```
ComputationGraphs.denseChain — Function
```
denseChain(TypeValue;
    nNodes=[],
    W=TypeArray{TypeValue,2}[],
    b=TypeArray{TypeValue,1}[],
    trainingBatchSize,
    inferenceBatchSize,
    activation=ComputationGraphs.relu,
    loss::Symbol=:sse,
    optimizer=NoOptimizer(),
    includeGradients=false,
    codeName="",
    parallel=false)
```
Create a computation graph for a dense forward neural network, defined as follows:
```
x[1] = input
z[k] = W[k] * x[k] + b[k] for k in 1,...,K
x[k+1] = activation(z[k]) for k in 1,...,K-1
output = z[K]
loss = norm2(output-desiredOutput)
g[loss,W[k]] = gradient(loss,W[k]) for k in 1,...,K
g[loss,b[k]] = gradient(loss,b[k]) for k in 1,...,K
```
Parameters:
- ::Type{TypeValue}: default type for the values of the computation graph nodes
- nNodes::Vector{Int}=Int[]: vector with the number of nodes in each layer, starting from the input and ending at the output layer.
- W::Vector{TypeArray{TypeValue,2}}=TypeArray{TypeValue,2}[]:
- b::Vector{TypeArray{TypeValue,1}}=TypeArray{TypeValue,1}[]:
- trainingBatchSize::Int:
- inferenceBatchSize::Int:
- activation::Function=ComputationGraphs.relu:
- loss::Symbol=:sse:
- optimizer::Op=NoOptimizer():
- includeGradients::Bool=false:
- codeName::String="":
- parallel::Bool=false:
Returns: Named tuple with fields
- graph
- ioNodes
- parameterNodes
- trainingNodes
- optimizerNodes
- code
- nOpsI2O
Number of forward operations to compute the output:
- z[k]:
  + # prods = sum(size(W[k],2)*size(W[k],1) for k in 1:K)
  + # sums = sum(size(W[k],2)*size(W[k],1) for k in 1:K)
- x[k+1]:
  + # activation = sum(size(W[k],2) for k in 1:K-1)
ComputationGraphs.denseQlearningChain — Function
Create computation graph for a dense forward neural network used to store reinforcement learning's Q-function.
ComputationGraphs.denseChain_FluxZygote — Function
Construct a dense forward neural network using Flux+Zygote.
ComputationGraphs.denseChain_FluxEnzyme — Function
Construct a dense forward neural network using Flux+Enzyme.
Parallelization
ComputationGraphs.computeSpawn! — Function
computeSpawn!(graph)
Spawns a set of tasks for the parallel evaluation of a computation graph.
ComputationGraphs.syncValid — Function
syncValid(graph)
Updates graph.validEvents::Threads.Event with graph.validValue::BitVector.
Usage:
- This function is automatically called from within ComputationGraphs.computeSpawn!(graph).
- It needs to be explicitly called if ComputationGraphs.set! or ComputationGraphs.copyto! is called upon any variable after ComputationGraphs.computeSpawn!(graph) was issued.
ComputationGraphs.request — Function
request(graph, node::Node)
request(graph, node::NTuple{Node})
request(graph, node::NamedTuple{Node})
Requests the parallel evaluation of a node or a tuple of nodes.
Presumes a previous call to computeSpawn!(graph)
Base.wait — Function
wait(graph, node::Node)
wait(graph, node::NTuple{Node})
wait(graph, node::NamedTuple{Node})
Waits for the evaluation of a node or a tuple of nodes, after an appropriate computation request made using request(graph, node(s)).
Presumes a previous call to computeSpawn!(graph)
ComputationGraphs.computeUnspawn! — Function
computeUnspawn!(graph)
Terminates the tasks spawned by ComputationGraphs.computeSpawn!(graph).
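A minimal sketch of the spawn/request/wait workflow described above (sizes and values are arbitrary; internal functions are qualified with the module name since the docstrings list them that way):
```julia
using ComputationGraphs
gr = ComputationGraph(Float32)
x = variable(gr, 1000)
y = @add gr norm2(x)
ComputationGraphs.computeSpawn!(gr)      # spawn tasks for parallel evaluation
set!(gr, x, randn(Float32, 1000))
ComputationGraphs.syncValid(gr)          # required after set! once tasks are spawned
ComputationGraphs.request(gr, y)         # request evaluation of y
wait(gr, y)                              # block until y holds a valid value
println(ComputationGraphs.nodeValue(y))  # read the value without triggering evaluation
ComputationGraphs.computeUnspawn!(gr)    # terminate the spawned tasks
```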
Code generation
ComputationGraphs.Code — Type
Structure used to generate dedicated code.
Fields
- parallel::Bool=false:
  1) When true, the valid flags are implemented with Threads.Events; otherwise just a Bool.
  2) When true, each node has a Threads.Task that computes the node as needed.
- unrolled::Bool=false:
  1) When true, the code generated for get uses a single function with nested if statements to compute nodes on demand. This can lead to very large functions for big graphs. Parallel computation is not supported in this mode.
  2) When false, each node has its own compute function that (recursively) calls the parents' compute functions.
- count::Bool=true: When true, the generated code includes counters for how many times each node's computation function has been called.
ComputationGraphs.sets! — Function
Add set!'s to code.
ComputationGraphs.computes! — Function
Add compute!'s to code.
ComputationGraphs.gets! — Function
Add get's to code.
ComputationGraphs.copies! — Function
Add copyto!'s to code.
Internal functions
Graph definition
ComputationGraphs.@newnode — Macro
@newnode name{C1,...,C2}::outputShape
@newnode name{Nparameters,C1,...,C2}::outputShape
Macro used to create a new computation node type, where
- C1,...,C2 represent the operands
- Nparameters (optional) represents the number of parameters, which are fixed (as opposed to the operands)
- outputShape is the size of the result and
  + can be a constant Tuple, as in @newnode norm2{x}::()
  + can use C1,...,C2 (especially their sizes), e.g., @newnode mult!{A,B}::(size(C1,1),size(C2,2))
  + can use the values of the parameters, denoted by par1, par2, ...; as in @newnode timesAdjointOnes{1,x}::(size(x, 1), par1)
This macro then generates
```julia
"""
Node of a computation graph used to represent the result of name()
"""
struct NodeName{TP<:Tuple,TPI<:Tuple,TV<:AbstractArray,TC} <: ComputationGraphs.AbstractNode
    id::Int
    parameters::TP
    parentIds::TPI
    value::TV
    compute!::TC
end
export name
name(graph::ComputationGraph, C1::T1, C2::T2, par1, par2,
) where {T1<:AbstractNode,T2<:AbstractNode} =
    push!(graph, NodeName, cg_name!, (par1, par2), (C1.id, C2.id), (C1.value, C2.value), outputShape)
```
Base.push! — Function
Add node to graph (avoiding repeated nodes).
ComputationGraphs.nodesAndParents — Function
List with all the parents of a set of nodes.
ComputationGraphs.add2children — Function
Add a node id to all its parents, parents' parents, etc.
ComputationGraphs.children — Function
List with all the children of a set of nodes.
ComputationGraphs.AbstractNode — Type
All nodes.
ComputationGraphs.AbstractConstantNode — Type
Nodes that never change (no sets & zero derivative).
ComputationGraphs.AbstractSpecialNode — Type
Nodes for which "shortcuts" in computation are possible.
ComputationGraphs.noComputation — Function
Nodes that do not require re-computation after creation.
ComputationGraphs.NodePlus — Type
Node of a computation graph used to represent the result of plus()
ComputationGraphs.NodeAdjoint_ — Type
Node of a computation graph used to represent the result of adjoint_()
ComputationGraphs.NodeAdjointTimes — Type
Node of a computation graph used to represent the result of adjointTimes()
ComputationGraphs.NodeAdjointTimesExpandColumns — Type
Node of a computation graph used to represent the result of adjointTimesExpandColumns()
ComputationGraphs.NodeAffine — Type
Node of a computation graph used to represent the result of affine()
ComputationGraphs.NodeAffineRows — Type
Node of a computation graph used to represent the result of affineRows()
ComputationGraphs.NodeColumn — Type
Node of a computation graph used to represent the result of column()
ComputationGraphs.NodeConstant — Type
Node of a computation graph used to represent a constant whose value cannot be changed. It is created by constant().
ComputationGraphs.NodeDdlogistic — Type
Node of a computation graph used to represent the result of ddlogistic()
ComputationGraphs.NodeDivideScalar — Type
Node of a computation graph used to represent the result of divideScalar()
ComputationGraphs.NodeDlogistic — Type
Node of a computation graph used to represent the result of dlogistic()
ComputationGraphs.NodeDot_ — Type
Node of a computation graph used to represent the result of dot_()
ComputationGraphs.NodeExp_ — Type
Node of a computation graph used to represent the result of exp_()
ComputationGraphs.NodeExpandColumns — Type
Node of a computation graph used to represent the result of expandColumns()
ComputationGraphs.NodeExpandColumnsTimesAdjoint — Type
Node of a computation graph used to represent the result of expandColumnsTimesAdjoint()
ComputationGraphs.NodeExponentScalar — Type
Node of a computation graph used to represent the result of exponentScalar()
ComputationGraphs.NodeFindMaxRow — Type
Node of a computation graph used to represent the result of findMaxRow()
ComputationGraphs.NodeHeaviside — Type
Node of a computation graph used to represent the result of heaviside()
ComputationGraphs.NodeHuber — Type
Node of a computation graph used to represent the result of huber()
ComputationGraphs.NodeLogistic_ — Type
Node of a computation graph used to represent the result of logistic_()
ComputationGraphs.NodeMaxRow — Type
Node of a computation graph used to represent the result of maxRow()
ComputationGraphs.NodeMinus — Type
Node of a computation graph used to represent the result of minus()
ComputationGraphs.NodeNorm1 — Type
Node of a computation graph used to represent the result of norm1()
ComputationGraphs.NodeNorm2 — Type
Node of a computation graph used to represent the result of norm2()
ComputationGraphs.NodeOnes — Type
Node of a computation graph used to represent a constant equal to an array of ones. It is created by ones().
ComputationGraphs.NodePointDivide — Type
Node of a computation graph used to represent the result of pointDivide()
ComputationGraphs.NodePointTimes — Type
Node of a computation graph used to represent the result of pointTimes()
ComputationGraphs.NodeRelu — Type
Node of a computation graph used to represent the result of relu()
ComputationGraphs.NodeSat — Type
Node of a computation graph used to represent the result of sat()
ComputationGraphs.NodeScalarPlus — Type
Node of a computation graph used to represent the result of scalarPlus()
ComputationGraphs.NodeScalarTimes — Type
Node of a computation graph used to represent the result of scalarTimes()
ComputationGraphs.NodeSelectRows — Type
Node of a computation graph used to represent the result of selectRows()
ComputationGraphs.NodeScalarDivide — Type
Node of a computation graph used to represent the result of scalarDivide()
ComputationGraphs.NodeSign_ — Type
Node of a computation graph used to represent the result of sign_()
ComputationGraphs.NodeSqrt_ — Type
Node of a computation graph used to represent the result of sqrt_()
ComputationGraphs.NodeSubtract — Type
Node of a computation graph used to represent the result of subtract()
ComputationGraphs.NodeSumColumns — Type
Node of a computation graph used to represent the result of sumColumns()
ComputationGraphs.NodeSumExpandColumns — Type
Node of a computation graph used to represent the result of sumExpandColumns()
ComputationGraphs.NodeTimes — Type
Node of a computation graph used to represent the result of times()
ComputationGraphs.NodeTimesAdjoint — Type
Node of a computation graph used to represent the result of timesAdjoint()
ComputationGraphs.NodeTimesAdjointOnes — Type
Node of a computation graph used to represent the result of timesAdjointOnes()
ComputationGraphs.NodeUnitTimesAdjoint — Type
Node of a computation graph used to represent the result of unitTimesAdjoint()
ComputationGraphs.NodeUnitVector — Type
Node of a computation graph used to represent vectors of the canonical basis.
ComputationGraphs.NodeVariable — Type
Node of a computation graph used to represent a variable whose value can be directly set. It is created by variable().
ComputationGraphs.NodeZeros — Type
Node of a computation graph used to represent a constant equal to an array of zeros. It is created by zeros().
Graph evaluation
ComputationGraphs.generateComputeFunctions — Function
Generates a function that conditionally evaluates a node, using closures & enforcing type stability.
Each function will
- check if each parent needs to be re-evaluated; if so, re-evaluate the parent and set its valid bit to true.
- always recompute the node itself:
  + without checking if it is needed (this should be checked by the caller, to enable force=true)
  + without setting the valid bit, which is expected to be set by the calling function.
ComputationGraphs.compute_node! — Function
compute_node!(node)
compute_node!(graph, node)
compute_node!(graph, id)
Call the function generated by generateComputeFunctions that computes a single node.
ComputationGraphs.compute_with_ancestors! — Function
compute_with_ancestors!(node)
compute_with_ancestors!(graph, node)
compute_with_ancestors!(graph, id)
Call the function generated by generateComputeFunctions that computes a node and all its required ancestors.
Code generation
ComputationGraphs.nodes_str — Function
Create initialization code.
ComputationGraphs.call_gs — Function
Create the string to call the function that does the computation.
ComputationGraphs.compute_str_recursive — Function
Create code to compute nodes (for gets) [recursive functions].
ComputationGraphs.compute_str_unrolled — Function
Create code to compute nodes (for gets) [single function with nested ifs].
ComputationGraphs.compute_str_parallel — Function
Create parallel code to recompute all nodes.