Preamble
All terms are defined within the scope of AI. Many of them have several other meanings; do not get trapped by those. Keep in mind what you are aiming at.
Underlying mathematical concepts

gradient
Vector containing the partial derivatives of a function with respect to each of its parameters wikipedia en, wikipedia fr
Jacobian
Matrix holding the first-order partial derivatives of a vector-valued function
wikipedia en, wikipedia fr 
Hessian
Matrix holding the second-order partial derivatives of a function
wikipedia en, wikipedia fr 
nabla
Symbol (∇) denoting the gradient operator
Software engineering concepts

recycling, broadcasting
Each of these terms refers to the ability to perform N-dimensional array operations without strict compliance with the mathematical requirements. For example, adding two matrices normally requires that they share exactly the same dimensions. These features relax such strict conditions into looser ones: the smaller operand is conceptually converted to a vector and resized (repeated) to the target length. Checks are still performed, and when lengths are incompatible, an error or warning is emitted by the software library in charge of the computation. This is a very convenient way to handle mathematical operations on N-dimensional data.
Recycling is the official term used in the R language, while broadcasting is the official term used in the Python ecosystem.
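As a sketch of how this behaves in practice, assuming NumPy as the library in charge of the computation:

```python
import numpy as np

# A (3, 1) column and a (1, 4) row are broadcast to a common (3, 4) shape.
col = np.arange(3).reshape(3, 1)   # shape (3, 1)
row = np.arange(4).reshape(1, 4)   # shape (1, 4)
total = col + row                  # shape (3, 4)

# Shapes that cannot be broadcast still trigger the error mentioned above.
try:
    np.arange(3) + np.arange(4)
except ValueError:
    print("shapes (3,) and (4,) are incompatible")
```

R applies the analogous recycling rule with `c(1, 2, 3) + 1`, repeating the shorter operand.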
vectorization
It is the ability of a programming language to perform N-dimensional operations while coding them as a simple mathematical expression. To sum three integers x, y, and z, you write result <- x + y + z. If your programming language has this capability, then x, y, and z could just as well be scalars, vectors, matrices, or N-dimensional arrays: the same code provides the right result whatever the inputs, and it does so taking recycling/broadcasting into account. Vectorization is generally tied to the operator overloading ability of the programming language.
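A minimal sketch of the same idea in Python with NumPy (the function name add3 is only illustrative):

```python
import numpy as np

def add3(x, y, z):
    # A single expression that works for scalars, vectors,
    # matrices, or N-dimensional arrays alike.
    return x + y + z

s = add3(1, 2, 3)                       # three scalars
v = add3(np.ones(3), np.ones(3), 1.0)   # two vectors plus a broadcast scalar
```

The same source line handles both calls; operator overloading on the array type does the work.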
Artificial Intelligence concepts
 AI algorithmic general pattern
AI software generally follows this simple pattern:
 1. compute forward propagation using the neural network model
 2. compute loss
 3. compute backward propagation
 4. update the model parameters
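The steps above can be sketched as a toy training loop. This is an illustrative example, not a real framework: a one-parameter linear model y_hat = w * x fitted by gradient descent, with the gradient written out by hand.

```python
import numpy as np

# Toy data: the target function is y = 2x.
x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x
w = 0.0  # the single model parameter

for step in range(200):
    y_hat = w * x                        # 1. forward propagation
    loss = np.mean((y_hat - y) ** 2)     # 2. compute loss (mean squared error)
    grad = np.mean(2 * (y_hat - y) * x)  # 3. backward propagation (dloss/dw)
    w -= 0.05 * grad                     # 4. update the model parameters
```

After the loop, w has converged close to the true value 2.0.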

forward propagation
tbd. 
back propagation
Method to compute the gradient of the loss with respect to the model parameters wikipedia en, wikipedia fr
loss function
To compute the loss, you have to code a loss function suited to the model you are evaluating. Machine learning is the process that tends to reduce the loss, i.e. to minimize 𝓛(ŷ, y)
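For instance, a common choice of 𝓛 for regression models is the mean squared error; a minimal sketch:

```python
import numpy as np

def mse_loss(y_hat, y):
    # Mean squared error: L(y_hat, y) = mean((y_hat - y)^2)
    return np.mean((y_hat - y) ** 2)

loss = mse_loss(np.array([1.0, 2.0]), np.array([1.0, 4.0]))
```

Other models call for other loss functions (cross-entropy for classification, for example).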

derivative computation methods
The back propagation phase in AI is intimately tied to the computation of derivatives, needed to update the model parameters. Several methods exist to do so, each with its own advantages and limits. Here is a quick summary of the most common derivative computation methods.
Mathematical derivative computation method
Given a mathematical function expression, you use mathematical derivation techniques to compute the expression of the derivative, which you then code directly. This method requires knowing the initial function expression and being able to derive it, which is sometimes very complex, but it always provides an exact result.
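A small illustration of coding a hand-derived derivative directly (the function is chosen only as an example):

```python
def f(x):
    # f(x) = x^3 + 2x
    return x**3 + 2*x

def f_prime(x):
    # Derived by hand: f'(x) = 3x^2 + 2, then coded directly.
    return 3*x**2 + 2
```

The result is exact at every point, but changing f means re-deriving and re-coding f_prime.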
Numerical differentiation computation method
This method approximates the derivative by computing the finite difference (f(x + h) - f(x)) / h for a small h. The result is an approximation.
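A minimal sketch of this forward-difference approximation:

```python
def numerical_derivative(f, x, h=1e-6):
    # Forward difference: (f(x + h) - f(x)) / h -- an approximation
    # whose error depends on the choice of h.
    return (f(x + h) - f(x)) / h

approx = numerical_derivative(lambda x: x**2, 3.0)  # exact derivative is 6
```

Too large an h gives truncation error; too small an h amplifies floating-point rounding error.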
Symbolic differentiation computation method
This method relies on expression manipulation: expressions are transformed according to symbolic calculus, changes of variable, and mathematical rules. It provides an exact result. It is generally done using a specialized mathematical tool, such as Matlab.
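In Python, SymPy offers the same kind of symbolic manipulation; a sketch, assuming SymPy is installed:

```python
import sympy as sp

x = sp.Symbol('x')
expr = x**3 + 2*x
derivative = sp.diff(expr, x)  # exact symbolic result: 3*x**2 + 2
```

The output is an expression, not a number; it can then be evaluated at any point or turned into code.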
Algorithmic differentiation computation method
This method uses an algorithm to evaluate the derivative. During the forward propagation it computes both the value and its derivative, and the algorithm not only provides exact results, it is also computationally efficient.
This method is sometimes named automatic differentiation or autodiff.
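A minimal sketch of the idea using dual numbers (forward-mode autodiff): each intermediate value carries its derivative alongside, and the chain/product rules are applied as the computation runs. The Dual class name is illustrative, not a real library API.

```python
class Dual:
    """Carries a value and its derivative through a computation."""
    def __init__(self, value, deriv):
        self.value = value
        self.deriv = deriv

    def __add__(self, other):
        # Sum rule: (u + v)' = u' + v'
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        # Product rule: (u * v)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

def f(x):
    # f(x) = x * x + x  ->  f'(x) = 2x + 1
    return x * x + x

x = Dual(3.0, 1.0)   # seed the derivative dx/dx = 1
result = f(x)        # value and exact derivative in one forward pass
```

Real frameworks (e.g. PyTorch, JAX) use the reverse-mode variant of this idea for efficiency on many-parameter models.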
